Research and development of artificial intelligence in China

This year saw several milestones in the development of artificial intelligence. In March, AlphaGo, a computer algorithm developed by Google’s London-based company, DeepMind, beat the world champion Lee Sedol at Go, an ancient Chinese board game. In October, the same company unveiled in the journal Nature its latest technique that allows a machine to solve tasks that require logic and reasoning, such as finding its way around the London Underground using a map it has never seen before.


THE HUMAN BRAIN AND LEARNING MACHINES
Poo: It's an exciting year for artificial intelligence. I have a basic question: how can we pursue brain-inspired artificial intelligence when there is still a long way to go to fully understand the human brain?

Zeng: I think "brain inspiration" is an accurate description of artificial-intelligence research, and one of the most promising approaches we have for developing future models. But the human brain is a product of evolution and is not entirely optimized. That's why I emphasize that we don't have to copy everything from the brain. We should borrow those operational principles of the brain that seem unique and useful to inspire artificial-intelligence models, improving their performance and extending their cognitive abilities. Of course, we are only beginning to understand the brain; it may take another couple of centuries to truly understand how it works. But there are things we know quite a bit about, such as the principle of plasticity at multiple scales, and such superior features have not yet been well integrated into current artificial-intelligence models and systems.

Chen: I have a different perspective because my research focuses on hardware. But I agree: the development of artificial intelligence is a long-term endeavour, and we should incorporate existing knowledge along the way rather than wait until we have understood everything about the human brain. It should be application-driven, aiming to solve specific practical problems.

Poo: If we talk about brain-inspired artificial intelligence, people have been working on areas such as machine perception since the 1940s. But incorporating plasticity is a relatively recent development.

Tan: Indeed, artificial intelligence is not a new topic. It's very hot now partly because of recent progress in neuroscience. While promising, brain inspiration may not be the only approach to advancing learning machines.

Poo: The media also has an important role in capturing the public imagination, as illustrated by AlphaGo's triumph in March. But the event also illustrates an urgent need in the field: current machines are not very efficient and require a massive amount of power (very few could afford the kind of computing power behind AlphaGo). This is especially pertinent when energy is increasingly in short supply. So many researchers are turning to the human brain, which is very energy efficient.

Tan: The public reaction to machines like AlphaGo is misplaced. It's not that difficult for machines to beat humans in board games.

Zha: I agree. Humans did not evolve to play board games or perform super-complicated arithmetic tasks. These are not fundamental aspects of human intelligence. This points to a key problem in artificial-intelligence research, which has largely focused on developing machines that excel in circumstances with clearly defined rules. Much less attention has been paid to behavioural capabilities in situations with fewer set rules, such as cooking at home or doing real work in the field.

Zeng: The current debate is still within the scope of Alan Turing's seminal 1950 article, 'Computing Machinery and Intelligence'. He discussed three aspects of artificial intelligence. The first is the Turing test: roughly speaking, a test of intelligence in which a human judge, putting questions to both a machine and another human being, should find it very hard to tell their replies apart.
The second is human-machine competition in board games. The third is that machines should be able to learn like a child, which is the essence, and the most challenging aspect, of artificial intelligence.

Poo: It's commonly thought that massive amounts of data are necessary to train artificial neural networks (an interconnected group of nodes, similar to the vast network of nerve cells in a brain), whereas human brains need much less information to make decisions. I think this view is misplaced. In fact, human brains also result from big-data training, which involves continuing changes in the brain's network structure. Newborn babies have nerve cells but are not equipped with a fully functional neural network. It is through years of learning that the neural network is modified and fine-tuned, accompanied by significant structural changes. This is why human neural networks are so efficient. In my view, the key to artificial intelligence is to develop artificial neural networks whose architecture can be changed through learning.
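The idea of a network whose wiring changes during learning can be illustrated with a toy sketch. Everything below is a hypothetical construction of ours, not any of the panellists' actual systems: a tiny two-layer network learns XOR while its first layer is periodically rewired, eliminating the weakest live synapse and growing a random new one, in the spirit of prune-and-regrow training schemes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, which cannot be solved without a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H = 16                                   # hidden units (illustrative size)
W1 = rng.normal(0.0, 1.0, (2, H))        # input -> hidden weights
W2 = rng.normal(0.0, 1.0, (H, 1))        # hidden -> output weights
mask = rng.random(W1.shape) < 0.5        # roughly half the input synapses start absent
initial_live = int(mask.sum())

def loss():
    out = sigmoid(sigmoid(X @ (W1 * mask)) @ W2)
    return float(np.mean((out - y) ** 2))

initial_loss = loss()
lr = 1.0
for step in range(4000):
    h = sigmoid(X @ (W1 * mask))         # forward pass through the sparse layer
    out = sigmoid(h @ W2)
    delta = (out - y) * out * (1 - out)  # gradient of squared error at the output
    dW2 = h.T @ delta
    dW1 = X.T @ ((delta @ W2.T) * h * (1 - h)) * mask
    W1 -= lr * dW1
    W2 -= lr * dW2
    if step % 500 == 499:                # structural plasticity: rewire the layer
        live = np.argwhere(mask)
        weakest = live[np.argmin(np.abs(W1[mask]))]
        mask[tuple(weakest)] = False     # eliminate the weakest live synapse
        dead = np.argwhere(~mask)
        grown = dead[rng.integers(len(dead))]
        mask[tuple(grown)] = True        # grow a fresh synapse elsewhere
        W1[tuple(grown)] = rng.normal(0.0, 1.0)

final_loss = loss()
```

The weights change through gradient descent, but the connectivity itself also changes through the mask: synapses are formed and eliminated during learning, which is the structural point being made above.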

PLASTICITY: THE ESSENCE OF ARTIFICIAL INTELLIGENCE
Tan: There is a lot of emphasis on brain mapping to gain better insight into how nerve cells are connected within the network. But it's probably more important to understand the mechanisms by which the network is formed during development. Brain-like in structure may not mean brain-like in function, because structures are static whereas learning is a dynamic process.

Zha: My research focuses on pattern recognition and computer vision. In my view, a key feature of artificial intelligence is its flexibility. After all, such systems need to operate in real-world scenarios, so their ability to adapt to a dynamic environment is really important. This self-learning capacity is closely related to brain-like computing and has two important aspects. First, the system has to be plastic, like the human brain. Second, the machine has to be able to interact with its social and natural environment. At the moment, the development of brain-like computing is more about studying the brain's structure and imitating a small part of its functions. I think we need to focus more on incorporating plasticity.

Poo: I agree. The key lies in the plasticity of connectivity, which is related to feedback, the correction of errors during learning, and ultimately structural change. The focus so far has been on computing power and speed, which is not the essence of human intelligence.

Zha: If machines are designed to function only in a fixed environment with fixed rules, then they don't need to change. To have a learning machine that can truly respond to its environment, you would need to integrate feedback mechanisms throughout the network.

Poo: Environmental feedback is related to learning. What are the challenges in the shift from supervised learning to unsupervised learning?
Chen: Before developing AlphaGo, DeepMind published a paper in Nature about an algorithm that uses large datasets to teach itself to play dozens of classic video games, by looking at the pixels and learning actions that increase the game score. This is an effective approach to reinforcement learning. But it may be limited to video games and board games, which have simple rules and straightforward goals, and may not apply to situations that involve complex environmental input. Another observation I have is that many people are working on sensory artificial intelligence, which is developing very fast, but the cognitive side lags far behind.

Poo: This probably echoes the progress in neuroscience. There have been lots of advances in sensory perception, but we still know very little about higher cognitive processes, such as language and decision-making.

Zeng: We see the progress of the DeepMind algorithm on deep reinforcement learning, but we can also see its problems. Although the machine can get feedback through interaction with its environment, the programme cannot transfer what it has learned from one game to another; it has to start from scratch every time it comes across a new game. Humans do not function that way: we can transfer skills learned from one task to new, unrelated tasks. This is the superiority of the human brain. In addition, the model could also be improved from a planning perspective.

Zha: There should be some conceptual changes when we talk about unsupervised learning. There is a lot of emphasis on training efficiency and the number of hours required in machine learning. In fact, learning is not about efficiency but about interaction with the environment. Efficiency and plasticity are categorically different challenges that require categorically different approaches.
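The feedback loop described above (act, observe a reward, adjust the value estimate) can be reduced to a few lines of tabular Q-learning. This is a deliberately tiny stand-in for the DeepMind system, not its actual algorithm: a made-up six-cell corridor world in which an agent learns, from reward alone, to walk towards the goal.

```python
import numpy as np

rng = np.random.default_rng(1)

# A made-up six-cell corridor: the agent starts in cell 0 and receives a
# reward of 1 only upon reaching cell 5. Actions: 0 = left, 1 = right.
N_STATES, N_ACTIONS, GOAL = 6, 2, 5

def env_step(state, action):
    nxt = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

Q = np.zeros((N_STATES, N_ACTIONS))  # value estimate for each state-action pair
alpha, gamma, eps = 0.5, 0.9, 0.2    # learning rate, discount, exploration rate

for episode in range(200):
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the current estimate, sometimes explore
        if rng.random() < eps:
            action = int(rng.integers(N_ACTIONS))
        else:
            action = int(np.argmax(Q[state]))
        nxt, reward, done = env_step(state, action)
        # the core feedback rule: nudge the estimate towards the observed
        # reward plus the discounted value of the next state
        target = reward + (0.0 if done else gamma * np.max(Q[nxt]))
        Q[state, action] += alpha * (target - Q[state, action])
        state = nxt

greedy_policy = [int(np.argmax(Q[s])) for s in range(GOAL)]
```

The transfer criticism is visible here too: the learned table `Q` is tied to this corridor's particular states and actions, so a new "game" means discarding it and starting from scratch.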
Zeng: I think the Holy Grail of artificial intelligence is to develop general intelligent systems that are mechanistically inspired by the brain and behaviourally similar to humans. Truly human-level intelligent systems should be able to process environmental information, define problems, and find solutions on their own. And the hardest problems may not lie only in higher cognitive functions. The key challenge is well articulated in the so-called Moravec's paradox. As Hans Moravec put it: 'It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.'

A GOLDEN ERA FOR ARTIFICIAL INTELLIGENCE IN CHINA
Poo: What has China been doing in the field of artificial intelligence? What kinds of policies does it have to support its development?

Tan: This is a golden era for neuroscience and artificial intelligence in China. The Chinese government takes it very seriously, with President Xi Jinping mentioning on several occasions the importance of the China Brain Project, which is one of the country's major science programmes for the coming decade. It's unprecedented.

Zeng: Indeed, there is a lot of government support at various levels. In addition to well-publicized national initiatives, several municipalities have programmes to boost the research and development of artificial intelligence. Beijing, for instance, has a project on brain-inspired computing, which involves several CAS institutes, Peking University, Tsinghua University, and others. Along with our industry partners, our institute [the CAS Institute of Automation] has also set up a special venture-capital fund totalling 1 billion yuan (US$150 million) dedicated to the development of artificial intelligence and robotics.

Poo: It seems that many universities and institutes have programmes on artificial intelligence. Are there differences in their research focus?
Zha: There aren't many differences in strategic planning between universities and institutes, which are often brought together through major national projects, though different research groups tend to focus on areas ranging from pattern recognition and hardware to robotics. Chinese researchers keep a close eye on what's going on in the West; when a promising direction emerges, everybody jumps on it.

Poo: The CAS Institute of Computing Technology has made some encouraging progress. Could you briefly summarize what it's about?

Chen: Our institute mainly focuses on hardware development.
A key accomplishment is the development of cutting-edge chips for processing artificial neural networks. People have been working on such hardware since the 1980s, but the scale was quite small in terms of the number of simulated nerve cells and synapses [junctions between two nerve cells where impulses pass by diffusion of specialized chemicals]. Now we have developed processors that are limited in size but can simulate an unlimited number of nerve cells and synapses in a neural network, by adapting a computation technique called virtualization.

Poo: How about the CAS Institute of Automation? What have you been up to?

Zeng: Brain-inspired intelligence is indeed a focus of our institute. Our long-term goal is to decode the principles and mechanisms of human intelligence and to develop brain-inspired intelligent systems with general intelligence. A recent milestone is the development of the so-called Parallel Brain Simulator, a preliminary attempt to simulate the cognitive brain at multiple scales: from ions, nerve cells and neural circuits of varying complexity to brain regions and cognitive behaviours. We have demonstrated that when certain neural principles are incorporated, such as the dynamic allocation of nerve cells, the formation and elimination of synapses, and an appropriate ratio of excitatory to inhibitory nerve cells, the accuracy of our spiking neural networks can be significantly improved.

Poo: That's interesting. What has it allowed you to do?

Zeng: It has allowed us to run a preliminary simulation of the mouse brain, including 71 million excitatory and inhibitory spiking neurons, 190 billion synapses, and 213 brain regions. We are also in the process of developing spiking neural network models capable of cognitive functions such as pattern recognition, inference and deduction, reinforcement learning, and working memory.
The Parallel Brain Simulator also serves as the 'brain' of a series of cognitive robots, in which multiple regions of this artificial brain coordinate with each other to perform various cognitive tasks.
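The spiking units that simulators of this kind compose by the millions can be illustrated with a minimal leaky integrate-and-fire neuron. The constants below are generic textbook-style values chosen for illustration, not parameters of the Parallel Brain Simulator.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# towards a resting value, is pushed up by input current, and emits a spike
# (then resets) whenever it crosses a threshold.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_reset=-70.0, v_thresh=-50.0):
    """Integrate a current trace (one value per time step); return spike times."""
    v = v_rest
    spikes = []
    for t, i_ext in enumerate(input_current):
        # Euler step of the membrane equation: leak towards rest, add input
        v += dt / tau * (-(v - v_rest) + i_ext)
        if v >= v_thresh:        # threshold crossed: record a spike and reset
            spikes.append(t)
            v = v_reset
    return spikes

# A constant driving current makes the neuron fire at regular intervals.
spike_times = simulate_lif([20.0] * 200)
```

A large-scale simulator wires millions of such units together, with each spike delivered through synapses as input current to downstream neurons; the single-neuron dynamics above are only the innermost loop of that process.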
In collaboration with other members of the Centre for Excellence in Brain Science and Intelligence Technology, we have also released 'Linked Brain Data', a knowledge engine for brain science, neuroscience and artificial-intelligence research. It is an effort to extract, integrate and analyse knowledge about the brain at multiple scales from neuroscience, psychology and cognitive-science research. More specifically, we have built a brain association graph, which provides multi-scale, multi-perspective associations among various brain building blocks, cognitive functions and brain diseases.

CHINA'S CHALLENGES
Poo: What are the main challenges? What does China need to do to improve its research output?

Tan: Artificial intelligence is related to computer science, neuroscience, cognitive science, and psychology. There should be platforms where scientists from different disciplines can regularly interact and exchange ideas, so that the latest progress can inform research directions. All communities need to recognize that they can get ideas and inspiration from one another; only then will they have the motivation to come together.

Poo: My observation is that we have a big neuroscience community and a big artificial-intelligence community, but they rarely get together at conferences.

Zha: This is a big problem in China. It's partly to do with education, which is quite narrow. Consequently, our scientists tend to have a rather narrow perspective and are rarely interested in things outside the field in which they were trained.

Poo: That's a good point, and we are trying to fix it. CAS is piloting a so-called dual-supervisor system, in which graduate students have two supervisors from different research areas and have to spend substantial time in each lab. From the perspective of artificial intelligence, it's really important for our students to have training in both neuroscience and computing, and such training has to start early.

Chen: This is a good idea. But I think it should start with undergraduate education, which should be a lot broader and more flexible than it is now, giving students much more freedom to pursue their interests.

Poo: I agree. Undergraduate education is very specialized these days. Students take too many classes on specialized topics and are reluctant to switch to a different field once they graduate. In my view, researchers should switch fields much more often; it should be the norm rather than the exception, because that's how creative ideas come about. How does our artificial-intelligence research compare with that in developed countries?
Chen: Compared with the West, China's advantage is that we have a massive market, which is developing very fast and, in a way, driving basic research. A problem in China is that people tend to follow what's going on in the West. I think we should be prepared to work for decades in areas that we believe are promising but are not terribly trendy or have no obvious short-term application value. For instance, perhaps we should focus more on cognitive artificial intelligence, where any breakthrough would have a revolutionary impact.

Zeng: As we discussed, the idea of brain-inspired artificial intelligence is not new. Some researchers have been using computer models to study cognitive psychology for decades, and those models are now used in artificial intelligence. I agree with Chen: it takes decades to develop an effective artificial-intelligence system. Currently, China is very short of such long-term endeavours, but the CAS Centre for Excellence in Brain Science and Intelligence Technology is moving in this direction.

Tan: China has been working on areas such as pattern recognition for decades, even though they were not called brain-inspired artificial intelligence until relatively recently. The scale of investment has been quite significant, with a massive research force and lots of publications. But we tend to follow trends set in the West and focus on incremental improvements to existing technologies. There is a serious lack of significant breakthroughs, and we definitely lag behind the West.

Poo: Why is that?

Tan: It's due to China's strategic framework and its evaluation system. It's also to do with the country's science culture, especially the tendency of jigongjinli [seeking quick success and short-term gains].

Zha: I think it is also to do with the stage of development. China started from quite a low level and has been playing catch-up.
The situation should get better once the overall level rises; Chinese researchers will then have more freedom to pursue their interests and take on risky projects, regardless of what's going on in the West.

Tan: The key is to set up platforms that bring researchers together, spur creative ideas, and share new results. There is also an urgent need to reform the evaluation system and to encourage scientists to take on long-term, risky projects.

Zeng: I agree. I don't think we are short of support or money. The more I look at research in developed countries, the more I feel our shortage is the ability to think differently. We are so used to following the trend, and few of us are willing to tackle ideas that may take decades to prove. We seem more preoccupied with producing a constant stream of papers for promotion purposes. But it's absolutely critical to be able to think differently and to have the courage to tackle ideas that most people don't dare to touch, such as general intelligence or even machine consciousness.

Poo: What is the relationship between international collaboration and international competition? I suspect some studies may have implications for military applications. Is that an issue when working with foreign researchers?

Tan: It's definitely a balance. In my view, we will advance a lot faster if we collaborate with the best research teams in the world.

Poo: In my institute, many people are afraid of collaborating with researchers in the West. They worry that their good ideas will be taken by Western researchers, who are faster and have the language advantage to write up results more quickly.

Tan: We have similar issues in artificial intelligence. I think there are skills to international collaboration regarding what to say and what not to say. We should learn to protect our own interests, in terms of intellectual property rights, commercial interests or military applications, while working with Western researchers.