
Blog by Author: Kabilan
- I’m a Tech blogger, AI Engineer, and Blockchain Enthusiast.
- I have previously written blogs on AI/ML and blockchain technology.
- I work as a Deep Learning Intern at a firm called “The Machine Learning Company”.
Introduction
AI is now seen, heard, and used everywhere in day-to-day life, from helping us solve problems to providing personalized recommendations based on our interests.
Recently, AI has also started playing a major role in security surveillance. In this fast-moving tech world, AI is likely to play a major role in almost every domain; however, the future is unpredictable, so let’s look into the following:
- What is AI?
- What is the history of AI?
- How has it evolved recently (Copilot and Codex), and what might the future of AI look like?
What is Artificial Intelligence?
Artificial intelligence (AI) is the simulation of human intelligence processes by computer systems.
Some applications of AI are:
- Natural Language Processing (NLP),
- Computer Vision (CV),
- Expert Systems,
- Speech Recognition, and
- Object Detection and Classification.
Before the boom of the AI era, artificial intelligence existed mainly as science fiction.
Artificial intelligence (AI) is a set of sciences, theories, and techniques, including mathematical logic, statistics, probability, computational neurobiology, and computer science, that aims to imitate the cognitive abilities of a human being.
The early idea of machines with human-like intelligence dates back to 1872, in Samuel Butler’s novel Erewhon. The concept of AI has also been a crucial part of many sci-fi movies. Director Ridley Scott gave AI an important role in films like Prometheus, Blade Runner, and the Alien franchise; the most relatable example is James Cameron’s Terminator, in which SkyNet is a fictional artificial superintelligence that plays the antagonist. You might also hear stories of a future controlled by artificial intelligence. Who knows? With robots like Sophia, it could even happen sooner than we think!
The Future is here with Codex and Copilot
1. OpenAI Codex
- OpenAI Codex is an AI-based natural language processing model developed by OpenAI. Given a comment describing the required problem, or a piece of code, it can generate the corresponding program code. GitHub Copilot, one of the best-known use cases of Codex, is also powered by OpenAI.
- Codex is a descendant of OpenAI’s GPT-3 model, a fine-tuned autoregressive model that can produce remarkably human-like text. On top of GPT-3, Codex is additionally trained on 159 gigabytes of Python code from 54 million public GitHub repositories.
- To code with Codex, users just give a command in English to the API, and Codex completes a piece of code for that command (see the sketch after this list). Currently, Codex is available as a free API in limited beta, though OpenAI has stated that this may change later.
- You can register for Codex by joining the waitlist.
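As a rough illustration of the workflow, here is a minimal sketch of calling Codex through the beta openai Python package. The engine name "davinci-codex" and the exact call shape are assumptions based on the beta documentation, so treat this as illustrative rather than official.

```python
# Minimal sketch: asking Codex to complete code from an English comment.
# Assumes beta access via the waitlist and the `openai` Python package;
# the engine name "davinci-codex" is an assumption from the beta docs.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci-codex",
    prompt='"""Return the factorial of n."""\ndef factorial(n):',
    max_tokens=64,
    temperature=0,
)
print(response.choices[0].text)  # the generated function body
```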
2. GitHub Copilot
- Every programmer or developer loves to have a co-programmer to work with, but unfortunately not everyone has the perfect pair. GitHub Copilot was built to solve this; GitHub describes it as “Your AI pair programmer”.
- GitHub Copilot is a VS Code extension that can autocomplete and synthesize code based on inputs like comments and function headers (illustrated in the sketch after this list). Copilot works well with Python, JavaScript, Go, Ruby, and TypeScript. Beyond suggesting code, it analyses and draws context from the code the user is working on, and it can also help create test cases.
- Copilot can even write prose, suggesting upcoming lines based on previous ones. Its code sometimes works well, but it is not always correct: it is trained mostly on public repositories, which do not always contain well-explained, well-structured code. Used directly in production, its output can introduce bugs, since it does not always follow best practices.
- Because Copilot generates phrases as well as code, it sometimes emits odd artifacts while creating a function, such as copyright comments like those found in copyrighted code. Still, it is a great resource for fast coding; its mistakes are usually easy for users to fix.
- Copilot is in its first version, and its limitations may be rectified in future versions.
- It’s a tool all programmers and developers can keep by their side. Try it yourself!
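To make the comment-to-code workflow concrete, here is the kind of exchange Copilot enables: you type a comment and a function header, and the extension proposes a body inline. The completion shown is a hypothetical example of a typical suggestion, not actual Copilot output.

```python
# What the user types: a descriptive comment and a function header.
# The body below is the kind of suggestion Copilot might offer
# (hypothetical output, for illustration only).

# Compute the nth Fibonacci number iteratively.
def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```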
Latest Advancements
1. AlphaGo
In 2016, AlphaGo became the first computer program to beat a professional Go champion, defeating the European champion Fan Hui and then the world champion Lee Sedol; it was in turn surpassed by its successor, AlphaGo Zero. Go is one of the most complex games, with more than 10^80 possible outcomes, and is known as the most challenging classic game for AI because of this complexity. It is also one of the oldest games, requiring multiple layers of strategic thinking.
2. OpenAI
OpenAI is an artificial intelligence research laboratory under its parent company OpenAI Inc., and a competitor to DeepMind. The goal of OpenAI is to develop safe and user-friendly AI that benefits humanity.
- In 2015, OpenAI was founded by Elon Musk and Sam Altman, together with Greg Brockman, Ilya Sutskever, Wojciech Zaremba, and John Schulman; the founders pledged over a billion USD to the venture.
- OpenAI stated that it would freely collaborate with other researchers and research institutions, making its research and patents open to the public.
- In 2016, the beta version of OpenAI Gym was released. OpenAI Gym is a toolkit for research on reinforcement learning, usable only from Python (a minimal example appears after this list).
- In 2018, Elon Musk resigned his board seat but remained a donor.
- In 2019, OpenAI received a 1 billion USD investment from Microsoft.
- In 2020, OpenAI announced GPT-3, an NLP model trained on hundreds of billions of words from many languages.
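As referenced above, here is a minimal sketch of the classic OpenAI Gym interaction loop, using a random agent on the standard CartPole environment. The API shown is that of the classic gym package; the episode length is an arbitrary illustrative choice.

```python
# Minimal OpenAI Gym loop: a random agent on CartPole.
# Assumes the classic `gym` package (pip install gym).
import gym

env = gym.make("CartPole-v1")
observation = env.reset()

for step in range(200):
    action = env.action_space.sample()            # random policy
    observation, reward, done, info = env.step(action)
    if done:                                      # pole fell or time limit hit
        observation = env.reset()

env.close()
```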
3. Sophia
- Sophia is a humanoid AI robot developed by Hanson Robotics, a Hong Kong-based company. It is a social robot that uses AI to see people, understand conversation, and form relationships. Sophia was activated in February 2016 and made its first public appearance in mid-March 2016 at the South by Southwest (SXSW) festival in Texas, United States.
- Sophia was the first humanoid robot to be granted Saudi Arabian citizenship and the first non-human to receive a United Nations title, being named the United Nations Development Programme’s first Innovation Champion. Sophia imitates human behavior more closely than other humanoid robots. Its architecture includes intelligence software designed by Hanson Robotics, a chat system, and OpenCog, an AI system designed for general reasoning. Sophia uses speech recognition technology from Alphabet Inc., and its speech synthesis was designed by CereProc. Sophia’s AI analyses conversations and uses the data to improve its future responses.
4. Google Duplex
- In 2018, at Google I/O, Google demonstrated Duplex, a voice assistant built on natural conversation technologies. Google Duplex is a fully automated system that can make calls or book appointments for you, with a voice that sounds more human than a typical generated robotic voice.
- Duplex is designed to understand complex sentences and fast speech. Initially, the feature launched only on Google Pixel devices.
- The core of Google Duplex is an RNN built using TensorFlow Extended (TFX). The model was trained on a corpus of anonymized phone conversation data, and it uses features of the input along with the conversation history to optimize its responses (a rough sketch of this kind of network follows). This allows people to interact with an AI naturally, without a robotic voice.
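Duplex’s actual model is proprietary, but as a rough sketch of the general shape of a recurrent network over a token sequence, here is a minimal Keras example. The vocabulary size, layer widths, and binary output task are illustrative assumptions, not Duplex’s architecture.

```python
# Illustrative RNN sketch in Keras -- NOT Duplex's actual model.
# Shows the general shape of a recurrent network over a token sequence.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),  # token ids -> vectors
    tf.keras.layers.LSTM(128),                                  # recurrent core
    tf.keras.layers.Dense(1, activation="sigmoid"),             # e.g. a yes/no intent
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# A dummy batch: 4 sequences of 20 token ids.
dummy_batch = tf.random.uniform((4, 20), maxval=10000, dtype=tf.int32)
print(model(dummy_batch).shape)  # (4, 1)
```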
5. Open-sourcing
- Google’s TensorFlow, a Python framework, was open-sourced in 2015, followed by Facebook’s PyTorch, a Python-based deep learning platform that supports dynamic computation graphs (illustrated in the sketch after this list). The competition between TensorFlow and PyTorch benefited the AI and ML communities through a steady stream of open-source updates.
- Many custom libraries, packages, frameworks, and tools were launched, making machine learning accessible and understandable to all. Many researchers also open-sourced their work, which let aspirants learn AI and apply it in many different fields. Competition platforms like Kaggle were among the key catalysts of AI’s growth.
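To illustrate what “dynamic computation graph” means in PyTorch, here is a small sketch: the graph is built as ordinary Python code runs, so control flow can depend on the data itself. The values are purely illustrative.

```python
# Dynamic computation graph in PyTorch: the graph is built per forward pass,
# so ordinary Python control flow (loops, ifs) can depend on the data.
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x
while y < 100:       # data-dependent loop: graph depth varies with x
    y = y * y
y.backward()         # autograd traverses whatever graph was actually built
print(y.item(), x.grad.item())   # here y = x**8, so dy/dx = 8 * x**7 = 1024
```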
Back to the History of AI
In the early days of World War II, Germany sent encrypted messages, the Enigma code, to communicate securely with other German forces. Alan Turing, a British scientist, built a machine called the Bombe that could decode the Enigma code. This code-breaking machine became an early foundation of machine learning.
1. Alan Turing – 1950
In 1950, the English mathematician, computer scientist, and theoretical biologist Alan Turing published the paper “Computing Machinery and Intelligence”. In it he asked, “Can machines think?”, a question that became very popular in those days.
Rather than trying to determine whether a machine is thinking, Turing suggested we ask whether the machine can win a game he called the “Imitation Game”. This proposed game became known as the Turing Test.
2. First Artificial Neural Network – 1951
- In 1943, the American neurophysiologist Warren Sturgis McCulloch, in his paper “A Logical Calculus of the Ideas Immanent in Nervous Activity”, laid out the initial idea for neural networks. McCulloch tried to demonstrate that a Turing machine could be implemented in a finite network of formal neurons.
- In 1951, inspired by McCulloch’s paper, Marvin Minsky and his graduate student Dean Edmunds built the first artificial neural network, using 3,000 vacuum tubes to simulate 40 neurons. This first neural network machine, known as SNARC (Stochastic Neural Analog Reinforcement Calculator), imitated a rat finding its way through a maze. It is considered one of the first steps toward building machine learning capabilities.
3. First Machine Learning Program – 1952
In 1952, Arthur Samuel wrote the first machine learning program, a checkers player (game AI); he was also the first to use the phrase “machine learning”. The program ran on an IBM computer and improved the more it played, studying which moves belonged to winning strategies and incorporating them into its play. It used a minimax strategy for selecting the next move, remembered the moves it had played, and combined them with the help of a reward function to find winning sets of moves. (A minimal minimax sketch follows.)
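Samuel’s original program is historical, but the minimax idea it used can be sketched in a few lines. The function names here (evaluate, moves, apply_move) are placeholders for game-specific logic, not part of Samuel’s code.

```python
# Minimal minimax sketch (illustrative, not Samuel's original program).
# Scores a two-player, zero-sum game tree: the mover maximizes the score,
# the opponent minimizes it.
def minimax(state, depth, maximizing, evaluate, moves, apply_move):
    if depth == 0 or not moves(state):
        return evaluate(state)          # reward/heuristic value at the leaf
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           evaluate, moves, apply_move) for m in moves(state))
    return min(minimax(apply_move(state, m), depth - 1, True,
                       evaluate, moves, apply_move) for m in moves(state))
```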
4. AI name coined – 1956
The field of AI wasn’t formally founded until 1956, at a conference at Dartmouth College in New Hampshire, where the term “Artificial Intelligence” was coined by John McCarthy, often called the father of AI. He defined AI as the science and engineering of making intelligent machines. In those days it was also referred to as computational intelligence, synthetic intelligence, or computational rationality, terms that also make sense for AI. The term artificial intelligence is used to describe a property of machines or programs: the intelligence that the system demonstrates.
5. IBM’s Deep Blue – 1957
- In 1957, Herbert Simon, an economist and sociologist, predicted that AI would beat a human at chess within the next 10 years. Instead, AI entered its first winter, and the prediction took roughly four decades to come true. The operation of Deep Blue was based on a systematic brute-force algorithm, in which all possible moves were evaluated and weighted.
- In 1985, the development of Deep Blue began as the ChipTest project at Carnegie Mellon University.
- The project was later renamed Deep Thought and, after the team moved to IBM, Deep Blue.
- In 1996, Deep Blue became the first computer to win a game against world chess champion Garry Kasparov, taking the first game of a six-game match; Kasparov still won the match 4–2. In the 1997 rematch, a heavily upgraded Deep Blue (nicknamed “Deeper Blue”) defeated the reigning world champion 3.5–2.5. IBM later dismantled the machine.
6. Perceptron – 1957
- In 1957, Perceptron was developed by Frank Rosenblatt at Cornell Aeronautical Laboratory.
- The perceptron was an early artificial neural network enabling pattern recognition, based on a two-layer learning network. An artificial neural network (ANN) is a machine model inspired by the functioning of the brain.
- Rosenblatt’s perceptron is a binary single-neuron model. It integrates its inputs by computing a weighted sum; if the sum is greater than a threshold, the neuron outputs 1, otherwise 0 (see the sketch after this list).
- Rosenblatt’s perceptron can solve some linearly separable classification problems.
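Here is a minimal sketch of a Rosenblatt-style perceptron in plain Python, trained on the logical AND function with the classic perceptron update rule. The learning rate and epoch count are arbitrary illustrative choices.

```python
# Minimal Rosenblatt-style perceptron: weighted sum + threshold,
# trained with the classic perceptron update rule on logical AND.
def predict(weights, bias, x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
weights, bias, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                      # a few epochs suffice for AND
    for x, target in data:
        error = target - predict(weights, bias, x)
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print([predict(weights, bias, x) for x, _ in data])  # expect [0, 0, 0, 1]
```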
7. First AI lab
- In 1959, the first AI lab was established at MIT.
- In 2003, it merged with the MIT Laboratory for Computer Science to form CSAIL (Computer Science and Artificial Intelligence Laboratory).
- CSAIL is one of the most important labs in the world. It laid the groundwork for image-guided surgery and NLP-based web access, and developed bacterial robots and behavior-based robots used for planetary exploration, military reconnaissance, and consumer devices.
- So far, 10 CSAIL researchers have won the Turing Award, which is also called the Nobel Prize of Computing.
8. ELIZA – 1965
- ELIZA, developed in 1965 by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory, was an early natural language program. It was the first program capable of attempting the Turing test, and the first chatterbot, created before the term “chatterbot” was even coined. ELIZA uses pattern matching and substitution to make it look as if it understands the conversation, creating for the first time the illusion of human-machine interaction.
- ELIZA works by recognizing keywords or phrases and matching them to pre-programmed responses, creating the illusion of understanding and responding (see the sketch after this list). However, ELIZA is incapable of learning new words through interaction alone; the words must be fed into its script.
- In 1972, ELIZA and another natural language program, PARRY, were brought together for a computer-only conversation, in which ELIZA spoke as a doctor and PARRY simulated a patient with schizophrenia.
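As referenced above, here is a tiny ELIZA-style sketch: keyword patterns mapped to canned response templates. The rules are toy examples; the real ELIZA script (DOCTOR) was far richer.

```python
# Tiny ELIZA-style sketch: keyword patterns mapped to canned responses.
# Illustrative only -- the real ELIZA script was far more elaborate.
import re

RULES = [
    (r"\bI need (.+)", "Why do you need {0}?"),
    (r"\bmy (mother|father)\b", "Tell me more about your {0}."),
    (r"\bI am (.+)", "How long have you been {0}?"),
]

def respond(text):
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."        # default when nothing matches

print(respond("I am feeling anxious"))   # -> How long have you been feeling anxious?
```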
9. AI Winter
- The AI winters were hard times for the field. AI encountered two major winters, from 1974 to 1980 and from 1987 to 1993. During these periods most major funding stopped, and the amount of AI research dropped drastically.
- The AI winters were the result of over-hype: impossible promises by developers and high expectations from clients and end users, with the eventual disappointment triggering the downturn. Arguably, the same over-hype risk is still with us in 2021.
- The failure of machine translation marked the start of the first winter. The US government had funded research to automate the translation of Russian documents, but word-sense disambiguation proved to be the major difficulty: machine translation was less accurate, slower, and more expensive than manual translation. The National Research Council (NRC) decided to cut funding after spending some 20 million dollars, and machine translation was put on hold.
- In the 21st century, many companies revisited machine translation and achieved useful results, such as Google Translate and Yahoo! Babel Fish.
- During the AI winters, many proclaimed the end of AI, but interest and enthusiasm gradually returned in the 1990s and increased dramatically from around 2012.
10. MYCIN
- MYCIN was an AI program developed at Stanford University in 1975, designed to assist physicians by recommending treatments for certain infectious diseases.
- The Divisions of Clinical Pharmacology and Infectious Disease, in collaboration with members of the Department of Computer Science, initiated the development of this computer-based system (termed MYCIN), which was capable of using both clinical data and judgmental decisions.
11. Convolutional Neural Networks
- In 1989, Yann LeCun (AT&T Bell Labs) applied the backpropagation algorithm to train a convolutional neural network (CNN), a multilayer neural network with one or more convolution layers depending on the use case. CNNs, also called ConvNets, are mostly used in pattern recognition and image processing. LeCun built on the work of Kunihiko Fukushima, a Japanese scientist who had invented a basic image-recognition neural network called the neocognitron. LeCun’s version of the CNN, called LeNet (named after LeCun), was able to recognize handwritten digits. (A minimal CNN sketch follows this list.)
- In 1995, the scientist Richard Wallace developed ALICE (Artificial Linguistic Internet Computer Entity), a chatbot inspired by Weizenbaum’s ELIZA. In addition to ELIZA’s approach, ALICE drew on a large natural language data collection. ALICE won the Loebner Prize three times (2000, 2001, and 2004) but did not pass the Turing test.
- In 1998, the ALICE program was rewritten in Java.
- In 2002, Amazon replaced human editors with basic AI systems, paving the way for others to see how AI could be utilized in business.
- Around 2010, AI boomed thanks to access to massive amounts of data and heavy computing resources. CNNs took on a new avatar and flourished in all subdomains of AI; until the 2010s they had remained dormant because training a CNN required lots of computing resources and lots of data.
- In 2012, Alex Krizhevsky designed AlexNet, a deep multilayer convolutional neural network trained on the ImageNet dataset, showing that complex CNNs could perform a range of computer vision tasks.
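As referenced above, here is a minimal LeNet-style CNN sketch in PyTorch: two convolution/pooling stages followed by fully connected layers, sized for 28x28 grayscale digits. The exact layer sizes are illustrative, not the original LeNet-5.

```python
# Minimal LeNet-style CNN sketch in PyTorch (illustrative, not LeNet-5 exactly).
import torch
import torch.nn as nn

class LeNetLike(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),    # 28x28 -> 24x24
            nn.ReLU(),
            nn.MaxPool2d(2),                   # -> 12x12
            nn.Conv2d(6, 16, kernel_size=5),   # -> 8x8
            nn.ReLU(),
            nn.MaxPool2d(2),                   # -> 4x4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, 120),
            nn.ReLU(),
            nn.Linear(120, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNetLike()
digits = torch.randn(8, 1, 28, 28)   # a dummy batch of 28x28 grayscale digits
print(model(digits).shape)           # torch.Size([8, 10])
```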
12. Eugene Goostman, a chatbot
- Eugene Goostman is a chatbot that portrays a 13-year-old Ukrainian boy, developed in Saint Petersburg in 2001 by the programmers Vladimir Veselov (Russian), Eugene Demchenko (Ukrainian), and Sergey Ulasen (Russian). The persona was chosen deliberately: in Veselov’s opinion, a thirteen-year-old is “not too old to know everything and not too young to know nothing”, a clever way to excuse its grammatical mistakes.
- Eugene Goostman competed in several competitions, finishing second in the Loebner Prize in 2005 and 2008. On the hundredth anniversary of Alan Turing’s birth, it participated in the largest-ever Turing test competition and convinced 29% of the judges that it was human.
- In 2014, Eugene Goostman captured headlines by convincing 33% of the judges that it was a real human during a Turing test; event organiser Kevin Warwick considered this a pass. It fulfilled Alan Turing’s prediction that by 2000 machines would be capable of fooling 30% of human judges after five minutes of questioning. The result was soon questioned by critics, who argued that Eugene Goostman used personality quirks and humor to hide its non-human tendencies and lacked real intelligence.
13. AI Boom (Again) in 2010
AI boomed heavily after 2010, entering every field and making a great impact. As the computer scientist Andrew Ng put it, “AI is the new electricity”. The main factors behind this boom were:
- Access to massive volumes of data, thanks to the spread of the internet and the IoT devices generating it.
- The rise of graphics processing units (GPUs), highly efficient processors that accelerate the computations of learning algorithms.
During the decade from 2010 to 2019, many new AI-based companies emerged:
- DeepMind, founded in 2010 by Demis Hassabis, was acquired by Google in 2014 for a reported £400 million.
- OpenAI, co-founded by Elon Musk, is a non-profit organization doing extensive research in deep reinforcement learning.
- Many new algorithms, models, and frameworks were developed, such as TensorFlow, ImageNet, FaceNet, NEIL (Never Ending Image Learner), Transformers, and BERT (Bidirectional Encoder Representations from Transformers), all of which accelerated the growth of AI.
14. DeepMind
DeepMind is an artificial intelligence company and a subsidiary of Alphabet Inc. It was started in 2010 by Demis Hassabis, Shane Legg, and Mustafa Suleyman. The start-up began with Hassabis teaching AI systems to play old video games with the help of neural networks, since old games are primitive and simple compared to those of the current generation.
Key Points to note:
- DeepMind uses reinforcement learning to make AI learn games and master them.
- The goal of the founders is to create a general-purpose AI that can be useful and effective for almost all use cases.
- The start-up attracted major venture capital firms like Horizons Ventures and Founders Fund, and investors such as Elon Musk and Peter Thiel.
- In 2014, Google acquired DeepMind for a reported £400 million.
- That same year, DeepMind received the “Company of the Year” award from the Cambridge Computer Laboratory.
- In 2017, DeepMind created a research team to investigate AI ethics.
15. Siri
In 2011, Apple introduced Siri, the world’s first substantial voice-based conversational interface. Siri’s introduction kicked off the use of AI in virtual personal assistants, and many companies followed with their own: Amazon’s Alexa in 2014, Microsoft’s Cortana, and Google Assistant for Android.
16. Watson, “Jeopardy” winner
In 2011, IBM’s question-answering system Watson won the quiz show “Jeopardy!” by beating reigning champions Ken Jennings and Brad Rutter. Watson was a room-sized machine named after IBM’s Thomas J. Watson. After this success, IBM also turned Watson into a commercial product.
17. Google X
In 2012, Google X (Google’s research lab) showed that an AI could recognize cats in videos. Google’s Jeff Dean and Andrew Ng trained a neural network spread across more than 16,000 processors to detect cats. When Alphabet Inc. was created, Google X was renamed X. This lab is working on many interesting and futuristic projects.
Some of the exciting works at this lab are:
- Google Glass – A research and development program to develop an augmented reality head-mounted display (HMD).
- Waymo – A driverless car project. After its success, it emerged as a separate company under Alphabet Inc.
- Google Brain – A deep learning research project. This project was considered one of the biggest successes under Google.
- Google’s driverless car – In 2012, Google’s driverless car passed Nevada’s self-driving test.
Conclusion
Now AI is touching new heights every day, with new applications across domains, and expectations of AI keep rising. A technology that once seemed finished has boomed again, and major credit for this advancement goes to the scientists who worked on it early on. For every futuristic solution, we need to look to the past. Artificial intelligence acts as the main driver of emerging technologies like robotics, big data, and the Internet of Things, and it will continue to act as a technological innovator for the foreseeable future. I have taken you through a historic journey of AI; I hope you enjoyed it and gained some new insights from this blog.
If you liked this blog, leave your thoughts and feedback in the comments section. See you again in the next interesting read!
Happy Learning!
Until Next Time, Take care!
– By Kabilan
Check out this blog on “8 Most Popular Types of Activation Functions in Neural Networks” here!