To those who want to switch into AI: here is a guide to artificial intelligence

Translation | AI Technology Base Camp (rgznai100)
Participation | Peng Shuo

What is artificial intelligence? Why is artificial intelligence important? Should we be afraid of artificial intelligence? Why is everyone suddenly talking about artificial intelligence?

You may have read online about how artificial intelligence powers Amazon's and Google's virtual assistants, or how it is gradually replacing jobs (a controversial claim), but those articles rarely explain what artificial intelligence actually is (or whether the robots are about to take over). This article explains artificial intelligence, and this concise guide will be updated and improved as the field evolves and important concepts emerge.

What is artificial intelligence?

Artificial intelligence is software, a computer program, with a mechanism for learning. It then uses that knowledge to make decisions in new situations, as humans do. The researchers who build this software try to write code that can read images, text, video, or audio and learn something from it. Once the machine has learned, that knowledge can be put to use elsewhere. If an algorithm learns to recognize someone's face, it can then find that face in Facebook photos. In modern AI, learning is often called "training" (more on that later).

Humans naturally learn complex ideas: we can see an object such as an apple and later recognize a different apple. Machines are very literal; a computer has no flexible concept of "similar." The goal of artificial intelligence is to make machines less literal. It is easy for a machine to judge whether two images of apples, or two sentences, are exactly identical, but artificial intelligence aims to recognize a picture of the same apple from a different angle or in different light; it captures what makes an apple an apple regardless of the viewing angle. This is called "generalization": forming an idea based on the essential similarities in the data rather than only the specific images or text the AI has seen. That more general idea can then be applied to things the AI has never seen before.
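
To make the distinction between "identical" and "similar" concrete, here is a minimal, hypothetical sketch in Python: an exact comparison fails on an apple image the system has never seen, while a simple nearest-neighbor model trained on a few labeled examples can still generalize. The two "features" and every data point are invented purely for illustration.

```python
# Minimal sketch (hypothetical data): exact matching vs. learned generalization.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Pretend each image is summarized by two made-up features: redness and roundness.
train_features = np.array([
    [0.9, 0.80],  # apple in bright light
    [0.7, 0.90],  # apple from another angle
    [0.2, 0.30],  # not an apple
    [0.1, 0.50],  # not an apple
])
train_labels = ["apple", "apple", "not apple", "not apple"]

new_image = np.array([0.8, 0.85])  # an apple the system has never seen before

# Literal comparison: the new image matches no training image exactly.
print(any(np.array_equal(new_image, f) for f in train_features))  # False

# A learned model generalizes from similarity instead of exact equality.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(train_features, train_labels)
print(model.predict([new_image])[0])  # "apple"
```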

Alex Rudnicky, a professor of computer science at Carnegie Mellon University, said: "The goal of artificial intelligence is to reduce complex human behavior to a form that can be computed. This in turn lets us build systems that are useful to humans and capable of complex activity."

How far along is artificial intelligence today?

Artificial intelligence researchers are still working on the fundamentals of the problem. How do we teach computers to recognize what they see in images and video? And beyond recognition, how do we get to understanding: not just the word "apple," but the fact that an apple is a food related to oranges and pears, that humans can eat apples, cook with them, and bake them into apple pies, and that the apple connects to the story of Johnny Appleseed, and so on. There is also the problem of understanding language: words have multiple meanings depending on context, definitions keep evolving, and everyone uses language a little differently. How does a computer make sense of such an unfixed, ever-changing construct?

Artificial intelligence progresses at different speeds in different media. Right now we are seeing astonishing gains in the ability to understand images and video, a field the industry calls computer vision. But that progress does not transfer much to other kinds of understanding, such as language, an area known as natural language processing. These fields are developing narrow intelligence, meaning an artificial intelligence can be powerful at processing images, audio, or text, but cannot learn in the same way across all three. A medium-agnostic form of learning would be general intelligence, which is what we see in humans. Many researchers hope that advances in the separate fields will reveal shared truths about how machines learn and eventually merge into a unified approach to artificial intelligence.

Why is artificial intelligence important?

Once artificial intelligence has learned how to identify an apple in an image, or to transcribe a snippet of speech from audio, that ability can be put into other software to make decisions that a human would otherwise have to make. It can be used to identify and tag your friends in Facebook photos, something you (a person) would previously have done by hand. It can pick out another car or a street sign in the camera feed of an autonomous car, or in your car's backup camera. It can spot substandard produce that should be pulled from a farm's production line. These tasks rest solely on image recognition and would normally be done by the user or by the company providing the software.

If a task saves the user time, it's a feature. If it saves the time of people working at a company, or eliminates a job entirely, it's an enormous cost saving. Some applications, such as crunching millions of sales data points in a few minutes, are simply impossible without a machine, which means the potential for insights that were never available before. These tasks can now be done quickly and cheaply by machines, anytime and anywhere. Replicating work humans once did, with an almost infinitely scalable, low-cost workforce, is an undeniable economic benefit.

Jason Hong, a professor at Carnegie Mellon University's Human-Computer Interaction Institute, said that while artificial intelligence can replicate human tasks, it also has the ability to open up new kinds of work. "The car was a direct substitute for the horse, but in the medium and long term it also brought many other uses, such as semi-trucks for large-scale shipping, moving trucks, minivans, and convertibles," Hong said. Similarly, artificial intelligence systems will directly replace routine tasks in the short term, but in the medium and long term we will see new uses just as dramatic as the car's.

Just as Gottlieb Daimler and Karl Benz did not foresee how the car would redefine the way cities were built, or its effects on pollution and obesity, we cannot yet see the long-term effects of this new kind of workforce.

Why is AI so hot now, not 30 (or 60) years ago?

Many of the ideas about how artificial intelligence should learn are actually more than 60 years old. In the 1950s, researchers such as Frank Rosenblatt, Bernard Widrow, and Marcian Hoff took biologists' ideas about how neurons in the brain work and expressed them mathematically. The idea was that one big equation might not solve every problem, but what if we used many small, connected equations, the way the human brain does? The first examples were simple: analyzing strings of 1s and 0s on a digital phone line and predicting what would come next. (This research by Widrow and Hoff at Stanford is still used today to reduce echo on phone connections.)

In 2006, 50 years after the original Dartmouth workshop, some of the participants reunited at Dartmouth. From left: Trenchard More, John McCarthy, Marvin Minsky, Oliver Selfridge, Ray Solomonoff

For decades, many in the computer science community argued that this approach would never solve genuinely complex problems; today it is the foundation of artificial intelligence at major technology companies, from Google and Amazon to Facebook and Microsoft. Looking back, researchers now realize that computers simply weren't sophisticated enough to simulate the billions of neurons in our brains, and that training these neural networks requires enormous amounts of data.

Those two factors, computing power and data, have only come together in the past 10 years. In the mid-2000s, the graphics processing unit (GPU) maker NVIDIA declared that its chips were well suited to running neural networks and began making AI easier to run on its hardware. Researchers found that faster, more complex neural networks meant better accuracy.

Then in 2009, artificial intelligence researcher Fei-Fei Li released a database called ImageNet, which contained more than 3 million images, organized and labeled. She believed that if these algorithms had more examples from which to find relationships between patterns, it would help them grasp more complex ideas. She launched an ImageNet competition in 2010, and by 2012 researcher Geoff Hinton's team had used those millions of images to train neural networks that beat the next best entry by a margin of more than 10% in accuracy. As Li predicted, data was key. Hinton's team also stacked neural networks on top of one another: one found only shapes, another found textures, and so on. These are called deep neural networks, or deep learning, and they are what most of today's news about artificial intelligence is actually describing. Once the technology industry saw the results, the AI boom began. Researchers who had quietly worked on deep learning for decades became the new rock stars of the tech industry. By 2015, Google had more than 1,000 projects using some kind of machine learning.

Should we fear artificial intelligence?

After watching a movie like The Terminator, it is easy to fear an omnipotent, evil AI like Skynet. In artificial intelligence research, something like Skynet would be called a superintelligence, or artificial general intelligence: software more capable than the human brain in every respect.

Computers scale: we can build ever stronger, faster machines and connect them together. The fear is that the computing power behind these artificial brains could grow to an unfathomable level, and that if they really became that smart, they would slip out of control and outmaneuver anyone trying to shut them down. This is the doomsday scenario that very smart people like Elon Musk and Stephen Hawking worry about. But most mainstream artificial intelligence researchers disagree with Musk's talk of "summoning the demon," even though such systems do show intelligence in narrow areas. Researchers are still working out the basics of learning, for example how a machine might understand the meaning behind a pattern and then organize those understandings into a functional view of the world, and there is no evidence that a computer will develop needs, desires, or a will to survive, says Yann LeCun, head of Facebook's artificial intelligence research lab.

"Behaviors like becoming violent when we feel threatened, being jealous, wanting exclusive access to resources, and preferring our close relatives to strangers evolved in us for survival. Unless we explicitly build these basic behaviors into intelligent machines, they will not have them," he wrote on Quora.

There is no evidence that computers will come to see humans as a threat, because nothing defines such a threat for a computer. A human could conceivably define it and instruct a machine to operate within parameters that function like a will to survive, but absent that, it will not exist.

Andrew Ng (Wu Enda), a founding member of the Google Brain team and former head of artificial intelligence at Baidu, put it this way: "I don't worry about artificial intelligence turning evil for the same reason I don't worry about overpopulation on Mars." But there is one reason to be wary of artificial intelligence: human beings.

There is evidence that artificial intelligence picks up human biases from the data it learns from. Some of these biases are harmless, such as recognizing cats in pictures more readily than dogs because the system was trained on more pictures of cats. But they can also perpetuate stereotypes, such as associating doctors with white men more strongly than with other genders or races. If an AI with that bias were responsible for screening doctors for hire, it could treat candidates who are not white men unfairly. A ProPublica investigation found that an algorithm used in criminal sentencing was racially biased, treating people of color more harshly. Health-care data often underrepresents women, especially pregnant women, so systems that offer medical advice work less well for them. These judgments used to be made by humans; now that we have vastly faster, more powerful machines making them, we need to make sure they decide fairly and in keeping with our ethics.

It is not easy to judge whether an algorithm is biased, because deep learning involves millions of interconnected calculations, and it is very hard to trace how all of those small decisions contribute to a larger one. So even when we know an AI has made a bad decision, we don't know why or how it got there, which makes it hard to build a mechanism that catches bias before it is put into practice.

This problem is especially fraught in areas such as autonomous vehicles, where every decision can be a matter of life and death. Early research offers hope that we can untangle the complexity of the machines we create, but for now it is nearly impossible to know why Apple's, Google's, or Microsoft's artificial intelligence made any particular decision.

A working glossary of AI terms:

Algorithm: A set of instructions that a computer follows. An algorithm can be a simple one-step procedure or a complex neural network, but the term is often used to refer to a model.

Artificial intelligence: An umbrella term. Broadly, it means software that imitates or replaces aspects of human intelligence. AI software can learn from data such as images or text, from experience, through evolution, or via other mechanisms researchers devise.

Computer vision: The area of artificial intelligence research devoted to recognizing and understanding images and video. It runs from understanding what an apple looks like to understanding what apples are used for and the concepts associated with them. It is a core technology behind self-driving cars, Google Image Search, and automatic tagging on Facebook.

Deep learning: Layering neural networks so they can grasp complex patterns and relationships in data. When the output of one neural network becomes the input of another, the networks are effectively stacked, and the resulting network is "deep."

General intelligence: Sometimes called "strong AI," a general intelligence would be able to learn ideas in one task and apply them to different tasks.

Generative adversarial network (GAN): A system of two neural networks, one that generates output and one that checks whether that output is what was wanted. For example, when trying to generate a picture of an apple, the generator creates an image and the other network (called the discriminator) tries to tell whether an apple is actually there; if not, the generator tries again.
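
As a rough illustration of that generator-discriminator loop, here is a heavily simplified training sketch assuming PyTorch is available. The "images" are just small random vectors, the network sizes are arbitrary, and every name is illustrative; real GANs differ greatly in scale and detail.

```python
# Minimal GAN training-loop sketch (PyTorch assumed; data and sizes are toy placeholders).
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 8, 16

# Generator: turns random noise into a fake "image" (here, just a vector of 16 numbers).
generator = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, DATA_DIM))

# Discriminator: outputs the probability that its input is real.
discriminator = nn.Sequential(nn.Linear(DATA_DIM, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(200):
    real = torch.randn(32, DATA_DIM) + 2.0        # stand-in for a batch of real images
    fake = generator(torch.randn(32, NOISE_DIM))  # the generator's attempt

    # 1) Train the discriminator to call real data real and fake data fake.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator (the "try again" step).
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print("final generator loss:", round(g_loss.item(), 3))
```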

Machine learning: Machine learning (ML) is often used almost interchangeably with artificial intelligence; it is the practice of using algorithms to learn from data.

Model: A machine learning algorithm that has built its own understanding of a topic, its own model of that part of the world.

Natural language processing: Software that seeks to understand language, including the intent behind it and the relationships between the ideas it expresses.

Neural network: An algorithm loosely modeled on the way the brain processes information, built from a network of connected mathematical equations. Data fed to a neural network is broken into smaller pieces and analyzed thousands of times over, depending on the network's complexity. When the output of one neural network is fed into the input of another, the two are linked into a layered, deep neural network. Typically the layers of a deep neural network analyze data at higher and higher levels of abstraction, which means they strip away unnecessary information until only the simplest, most accurate representation of the data remains.
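
As an illustration of "a network of connected equations," here is a minimal NumPy sketch of a single forward pass through a two-layer network. The weights are random, so it computes nothing meaningful; it only shows how each layer's equation feeds the next.

```python
# Minimal sketch: a neural network is connected equations (NumPy only, random weights).
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(4)                            # input: 4 numbers standing in for raw data

W1, b1 = rng.random((8, 4)), rng.random(8)   # layer 1's equation: h = max(0, W1 @ x + b1)
h = np.maximum(0, W1 @ x + b1)               # 8 intermediate values, a more abstract view

W2, b2 = rng.random((3, 8)), rng.random(3)   # layer 2 consumes layer 1's output
scores = W2 @ h + b2                         # 3 output numbers, e.g. one score per class

print(scores)
```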

Convolutional neural network: A neural network used mainly to recognize and understand images, video, and audio, because it can handle dense data such as the millions of pixels in an image or the thousands of samples in an audio file.
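
A minimal convolutional network sketch, assuming PyTorch; the layer sizes are arbitrary and chosen only to show the usual convolve-pool-classify shape on small images.

```python
# Minimal convolutional network sketch (PyTorch assumed; sizes are placeholders).
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # scan the image with small learned filters
    nn.ReLU(),
    nn.MaxPool2d(2),                             # shrink the grid, keeping strong responses
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 2),                    # e.g. scores for "apple" vs. "not apple"
)

fake_batch = torch.randn(4, 3, 32, 32)  # four fake 32x32 RGB images
print(cnn(fake_batch).shape)            # torch.Size([4, 2])
```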

Recurrent neural network: A neural network used in natural language processing that analyzes data cyclically and sequentially, meaning it can process data such as the words of a sentence while preserving their order and context.
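
A minimal sketch, assuming PyTorch, of how a recurrent network reads words one at a time while carrying a hidden state forward; the vocabulary and sizes here are invented. (Swapping nn.RNN for nn.LSTM gives the long short-term memory variant described in the next entry.)

```python
# Minimal recurrent network sketch (PyTorch assumed; toy vocabulary, order is preserved).
import torch
import torch.nn as nn

vocab = {"the": 0, "cat": 1, "sat": 2, "down": 3}
sentence = torch.tensor([[vocab[w] for w in ["the", "cat", "sat", "down"]]])  # shape (1, 4)

embed = nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)  # word id -> vector
rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)      # reads the vectors in order

outputs, hidden = rnn(embed(sentence))
print(outputs.shape)  # torch.Size([1, 4, 16]): one state per word, in sentence order
print(hidden.shape)   # torch.Size([1, 1, 16]): a summary after reading the whole sentence
```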

Long short-term memory network (LSTM): A variant of the recurrent neural network designed to retain structured information from data over longer spans. For example, an RNN could recognize all the nouns and adjectives in a sentence and check whether they are used correctly, while an LSTM could remember the plot of a book.

Reinforcement learning: A kind of machine learning that learns from experience. The algorithm controls some aspect of its environment, such as a character in a video game, and learns through trial and error. Because video games can be replayed endlessly, run entirely on computers, and stand in for a three-dimensional world, many reinforcement learning breakthroughs have come from algorithms that play them. Reinforcement learning was one of the main kinds of machine learning behind DeepMind's AlphaGo, which defeated world champion Lee Sedol at Go. In the real world it has been demonstrated in areas such as cybersecurity, where software has learned to trick antivirus programs into treating malicious files as safe.
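
To make trial and error concrete, here is a self-contained tabular Q-learning sketch on a made-up one-dimensional "game": the agent must learn to walk right along a corridor to reach a reward. This is a textbook toy, not DeepMind's method; every number in it is an arbitrary choice.

```python
# Minimal reinforcement learning sketch: tabular Q-learning on a toy corridor "game".
import random

N_STATES = 5          # positions 0..4; reaching position 4 earns the reward
ACTIONS = [-1, +1]    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.3   # learning rate, discount, exploration rate

for episode in range(500):
    state = random.randrange(N_STATES - 1)        # start somewhere short of the goal
    for _ in range(100):                          # cap the length of each episode
        # Explore sometimes; otherwise take the action that has worked best so far.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Learn from the outcome of this trial (the Q-learning update).
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

        state = next_state
        if state == N_STATES - 1:
            break

# The learned policy should be "always step right" toward the reward.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])  # [1, 1, 1, 1]
```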

Superintelligence: Artificial intelligence more capable than the human brain. It is hard to define precisely, because we still cannot objectively measure everything the human brain can do.

Supervised learning: Machine learning in which the training data provided is well organized and already labeled. If you were building a supervised learning algorithm to recognize cats, you would train it on, say, 1,000 pictures of cats.
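
A minimal sketch of the labeled-data idea, assuming scikit-learn is installed. Instead of real cat photos, each "image" is reduced to two invented features, and the labels are supplied up front, which is what makes this supervised.

```python
# Minimal supervised learning sketch (scikit-learn assumed; features and labels are made up).
from sklearn.linear_model import LogisticRegression

# Each example is described by two invented features, e.g. "ear pointiness" and "whisker score".
features = [[0.9, 0.80], [0.8, 0.90], [0.7, 0.95],
            [0.1, 0.20], [0.2, 0.10], [0.3, 0.15]]
labels = ["cat", "cat", "cat", "not cat", "not cat", "not cat"]  # the supervision

model = LogisticRegression()
model.fit(features, labels)            # training: learn from the labeled examples

print(model.predict([[0.85, 0.9]]))    # expected: ['cat']
```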

Training: The process of teaching an algorithm by feeding it data.

Unsupervised learning: Machine learning in which the algorithm is given no information about how it should classify the data and must find the relationships on its own. Artificial intelligence researchers such as Facebook's LeCun see unsupervised learning as the holy grail of AI research, because it is so similar to the way humans learn naturally. "In unsupervised learning, the brain is much better than our models," LeCun told IEEE Spectrum. "That means our artificial learning systems are missing some very basic principles of biological learning."
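
In contrast to the supervised sketch above, here is a minimal unsupervised example, again assuming scikit-learn: the algorithm receives the same kind of points but no labels at all, and has to discover the grouping on its own. The data is invented.

```python
# Minimal unsupervised learning sketch (scikit-learn assumed; no labels are provided).
from sklearn.cluster import KMeans

# Six unlabeled points that happen to fall into two loose groups.
points = [[0.9, 0.80], [0.8, 0.90], [0.7, 0.95],
          [0.1, 0.20], [0.2, 0.10], [0.3, 0.15]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
kmeans.fit(points)                 # the algorithm finds the grouping by itself

print(kmeans.labels_)              # e.g. [1 1 1 0 0 0]: two clusters, with no names attached
```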