Artificial intelligence makes use of computers and technology to simulate the human mind’s problem-solving and decision-making capabilities.
What is the definition of artificial intelligence (AI)?
While numerous definitions of artificial intelligence (AI) have surfaced over the past few decades, John McCarthy offers the following definition in a 2004 paper:
“It is the science and engineering behind the development of intelligent machines, most notably intelligent computer programs.
It is related to the similar task of using computers to understand human intelligence, but AI is not limited to biologically observable methods.”
However, decades before this definition, Alan Turing’s seminal paper, “Computing Machinery and Intelligence,” was published in 1950.
Turing, often referred to as the “father of computer science,” poses the following question in the paper: “Can machines think?”
From there, he proposes a test, now famously known as the “Turing Test,” in which a human interrogator attempts to distinguish between a computer-generated and a human-generated text response.
While this test has been subjected to considerable examination since its publication, it remains an integral element of the history of artificial intelligence as well as an ongoing philosophical notion due to its use of linguistic concepts.
Stuart Russell and Peter Norvig then published Artificial Intelligence: A Modern Approach, which quickly became one of the main textbooks on the subject.
They delve into four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and of thinking vs. acting:
Human approach:
1. Systems that think like humans
2. Systems that act like humans
Ideal approach:
1. Systems that think rationally
2. Systems that act rationally
Alan Turing’s definition would have fallen under the category of “systems that act like humans.”
Artificial intelligence, in its simplest form, is a field that combines computer science with large datasets to facilitate problem-solving.
Additionally, it comprises the subfields of machine learning and deep learning, which are typically associated with artificial intelligence.
These disciplines comprise AI algorithms that seek to create expert systems capable of making predictions or classifications based on input data.
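As a minimal, illustrative sketch of that idea, the snippet below fits a simple classifier to a tiny dataset and uses it to classify a new input. The fruit features, labels, and numbers are all invented for illustration, and scikit-learn is assumed to be available:

```python
# A toy "prediction from input data" sketch (scikit-learn; hypothetical data).
from sklearn.neighbors import KNeighborsClassifier

# Each sample: [weight in grams, skin texture (0 = smooth, 1 = bumpy)]
X = [[150, 0], [170, 0], [140, 1], [130, 1]]
y = ["apple", "apple", "orange", "orange"]  # known labels

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)                   # learn from the dataset
print(model.predict([[160, 0]]))  # classify a new input -> ['apple']
```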
Today, a great deal of hype still surrounds AI development, which is to be expected of any new, emerging technology.
According to Gartner’s hype cycle, product innovations such as self-driving cars and personal assistants follow “a typical progression of innovation, from initial enthusiasm to disillusionment and finally to an understanding of the innovation’s relevance and role in a market or domain.”
As Lex Fridman noted in his 2019 MIT lecture, we are at the peak of inflated expectations, approaching the trough of disillusionment.
As discussions about the ethics of AI begin to emerge, we can witness the first signs of the trough of disillusionment.
Artificial intelligence classifications—weak AI vs. strong AI
Weak AI, also known as Narrow AI or Artificial Narrow Intelligence (ANI), is artificial intelligence that has been trained and focused on performing specific tasks.
Weak AI is responsible for the majority of the AI that surrounds us today.
‘Narrow’ might be a more accurate descriptor for this sort of AI, as it is anything but weak: it enables some very robust applications, such as Apple’s Siri, Amazon’s Alexa, IBM Watson, and self-driving cars.
Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI).
Artificial general intelligence (AGI), or general AI, is a theoretical form of artificial intelligence in which a machine would possess an intelligence equal to that of humans; it would have a self-aware consciousness capable of solving problems, learning, and planning for the future.
Artificial Super Intelligence (ASI) — often known as super intelligence — would outperform the human brain’s intelligence and capability.
While strong AI remains entirely theoretical, with no practical examples in use today, that does not mean AI researchers are not exploring its development.
In the meantime, the best examples of ASI may come from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey.
Machine learning vs. deep learning
Because deep learning and machine learning are frequently used interchangeably, it’s important to understand the distinctions between the two.
As previously stated, both deep learning and machine learning are subfields of artificial intelligence; in fact, deep learning is a subfield of machine learning.
[Figure: a visual representation of the relationship between AI, machine learning, and deep learning]
Deep learning is composed of neural networks.
The term “deep” in deep learning refers to a neural network comprising more than three layers, inclusive of the input and output layers.
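In place of the usual diagram, here is a minimal sketch of such a network in Keras; the layer sizes, the flattened-image input, and the 10-class task are arbitrary assumptions made purely for illustration:

```python
# A sketch of a "deep" network: an input layer, three hidden layers, and an
# output layer (all sizes are arbitrary, illustrative choices).
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),            # input layer, e.g. a flattened 28x28 image
    layers.Dense(128, activation="relu"),    # hidden layer 1
    layers.Dense(64, activation="relu"),     # hidden layer 2
    layers.Dense(32, activation="relu"),     # hidden layer 3
    layers.Dense(10, activation="softmax"),  # output layer: 10 class probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()  # five layers in total, i.e. more than three: "deep"
```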
The distinction between deep learning and machine learning lies in the manner in which each algorithm learns.
Deep learning automates much of the feature extraction process, removing the need for manual human intervention and enabling the use of larger data sets.
Consider deep learning to be “scalable machine learning,” as Lex Fridman highlighted in the same MIT presentation mentioned above.
Classical, or “non-deep,” machine learning is more dependent on human intervention to learn.
Human experts determine the hierarchy of features that distinguishes the data inputs, which typically requires more structured data to learn.
While “deep” machine learning can leverage labelled datasets (an approach known as supervised learning) to inform its algorithm, it does not strictly require them.
It is capable of ingesting unstructured data in its raw form (e.g., text, photos) and automatically determining the hierarchy of features that differentiate distinct types of data.
Unlike classical machine learning, it does not require human intervention to process data, allowing machine learning to scale in more interesting ways.
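To make the contrast concrete, here is a sketch of the classical approach, in which a human decides which features matter before any learning happens. The spam features, example messages, and labels are invented for illustration, and scikit-learn is assumed; a deep model would instead consume the raw text and learn such features on its own:

```python
# Classical ("non-deep") machine learning: a human designs the features.
from sklearn.linear_model import LogisticRegression

def extract_features(message):
    """Hand-crafted features: length, exclamation marks, the word 'free'."""
    return [len(message), message.count("!"), int("free" in message.lower())]

messages = ["FREE prize!!!", "Lunch at noon?", "Win FREE cash!", "See you soon"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

X = [extract_features(m) for m in messages]
clf = LogisticRegression().fit(X, labels)
print(clf.predict([extract_features("Claim your FREE gift!")]))  # -> [1]
```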
Applications of artificial intelligence
Today, AI systems have a plethora of real-world applications.
The following are some of the more frequent examples:
Speech recognition: Also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text, this is a capability that utilises natural language processing (NLP) to convert human speech into written text.
Numerous mobile devices incorporate speech recognition into their systems to enable voice search—for example, Siri—or to increase messaging accessibility.
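As a hedged illustration, the sketch below transcribes a short audio file with the third-party SpeechRecognition package; the filename is a placeholder, and the free Google Web Speech API backend used here requires network access:

```python
# A minimal speech-to-text sketch (SpeechRecognition package; "sample.wav"
# is a placeholder for any short WAV recording of speech).
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("sample.wav") as source:
    audio = recognizer.record(source)  # read the whole file into memory

try:
    print(recognizer.recognize_google(audio))  # transcript as plain text
except sr.UnknownValueError:
    print("Could not understand the audio")
```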
Customer service: Throughout the customer journey, online chatbots are displacing human agents.
They answer frequently asked questions (FAQs) about topics such as shipping, or provide personalised advice, such as cross-selling products or suggesting appropriate sizes for users, fundamentally changing how we think about customer engagement across websites and social media platforms.
Examples include messaging bots on e-commerce sites with virtual agents, messaging apps such as Slack and Facebook Messenger, and tasks usually done by virtual assistants and voice assistants.
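A production chatbot would rely on NLP models, but a toy rule-based version conveys the basic idea; the topics and canned answers below are invented for illustration:

```python
# A toy rule-based FAQ bot (illustrative only; real chatbots use NLP models).
FAQ = {
    "shipping": "Standard shipping takes 3-5 business days.",
    "returns": "You can return items within 30 days of delivery.",
    "sizing": "Our sizes run true to fit; see the size chart for details.",
}

def reply(message):
    """Match a customer message against known FAQ topics by keyword."""
    text = message.lower()
    for topic, answer in FAQ.items():
        if topic in text or topic.rstrip("s") in text:
            return answer
    return "Let me connect you with a human agent."

print(reply("How long does shipping take?"))
```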
Computer vision: This artificial intelligence technology enables computers and systems to derive meaningful information from digital images, videos, and other visual inputs, and to take appropriate action based on that information.
This ability to provide recommendations for action distinguishes it from image recognition tasks, which stop at labelling what appears in an image.
Computer vision, which is based on convolutional neural networks, has applications in social media photo tagging, radiological imaging in healthcare, and self-driving automobiles in the automotive industry.
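As a hedged sketch of the underlying technique, here is a minimal convolutional network in Keras; the 28x28 grayscale input and 10-class task are arbitrary assumptions, not a description of any production vision system:

```python
# A minimal convolutional neural network sketch (Keras; arbitrary sizes).
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),             # e.g. 28x28 grayscale images
    layers.Conv2D(32, (3, 3), activation="relu"),  # learn local visual features
    layers.MaxPooling2D((2, 2)),                   # downsample the feature maps
    layers.Conv2D(64, (3, 3), activation="relu"),  # learn higher-level features
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),        # 10 class probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```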
Recommendation engines: By analysing historical data on consumer behaviour, AI algorithms can help identify data trends that can be leveraged to develop more effective cross-selling strategies.
This is utilised by online businesses to give relevant add-on recommendations to customers throughout the checkout process.
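One simple way to mine such patterns is item-to-item co-occurrence over past orders; the sketch below (with invented baskets) recommends the items most often bought together with one already in the cart:

```python
# A toy item-to-item co-occurrence recommender (illustrative only).
from collections import Counter
from itertools import combinations

orders = [  # hypothetical historical baskets
    ["laptop", "mouse", "sleeve"],
    ["laptop", "mouse"],
    ["phone", "case"],
]

co_counts = Counter()
for basket in orders:
    for a, b in combinations(sorted(set(basket)), 2):
        co_counts[(a, b)] += 1  # count items that appear in the same order

def recommend(item, k=2):
    """Return the k items most often bought together with `item`."""
    scores = Counter()
    for (a, b), n in co_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [it for it, _ in scores.most_common(k)]

print(recommend("laptop"))  # -> ['mouse', 'sleeve']
```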
Automated stock trading: Designed to optimise stock portfolios, AI-driven high-frequency trading platforms make thousands, if not millions, of trades per day without human intervention.
The History of Artificial Intelligence: Significant Dates and Persons
The concept of a ‘thinking machine’ dates back to ancient Greece.
However, significant events and milestones in the evolution of artificial intelligence since the introduction of electronic computing (and concerning several of the subjects mentioned in this article) include the following:
1950: Alan Turing publishes Computing Machinery and Intelligence. In the paper, Turing (famous for breaking the Nazis’ ENIGMA code during WWII) proposes to answer the question ‘Can machines think?’ and introduces the Turing Test to determine whether a computer can demonstrate the same intelligence (or the results of the same intelligence) as a human.
Since then, the Turing test’s utility has been contested.
1956: John McCarthy coins the term ‘artificial intelligence’ at the first-ever AI conference, held at Dartmouth College.
(McCarthy would later design the Lisp programming language.)
Later that year, Allen Newell, J.C. Shaw, and Herbert Simon develop the Logic Theorist, the world’s first functioning artificial intelligence computer programme.
Frank Rosenblatt creates the Mark 1 Perceptron, the world’s first computer built on a neural network that ‘learned’ via trial and error.
Only a year later, Marvin Minsky and Seymour Papert publish Perceptrons, which becomes both a seminal work on neural networks and, for a while, an argument against further neural network research.
1980s: Backpropagation neural networks, which train themselves using a backpropagation algorithm, become widely employed in artificial intelligence applications.
1997: IBM’s Deep Blue defeats Garry Kasparov, the world chess champion at the time, in a chess match (and rematch).
2011: IBM Watson defeats Jeopardy! champions Ken Jennings and Brad Rutter.
2015: Baidu’s Minwa supercomputer utilises a type of deep neural network called a convolutional neural network to detect and classify images more accurately than the average person.
2016: DeepMind’s AlphaGo programme, powered by a deep neural network, defeats Lee Sedol, the world champion Go player, in a five-game match.
The victory is important in light of the game’s enormous number of possible plays (nearly 14.5 trillion after only four moves!).
Google had acquired DeepMind in 2014 for a reported $400 million.