Artificial Intelligence Business: How you can profit from AI

A short history of Artificial Intelligence

The term artificial intelligence was first used in 1955 by John McCarthy, a mathematics professor at Dartmouth who organized the seminal conference on the topic the following year. In 1957 the economist Herbert Simon predicted that computers would beat humans at chess within 10 years (he was slightly wrong; it took 40). In 1967 the cognitive scientist Marvin Minsky said, “Within a generation, the problem of creating ‘artificial intelligence’ will be substantially solved.” Simon and Minsky were both intellectual giants, but they were badly wrong about AI. These dramatic but mistaken claims shaped how people in the second half of the 20th century thought about AI: more as a subject for science fiction novels than as actual science.

The idea of artificial intelligence as the automation of certain repetitive processes dates back to the Cold War, when US intelligence agencies were trying to translate Russian documents and reports automatically. The initial optimism of the 1950s was then undermined by the underperformance of these early systems and a fundamental lack of progress beyond the first results. As optimism faded, funding was cut substantially and the academic community turned away from AI, especially in the 1970s, when DARPA withdrew its support. This period of reduced funding and loss of interest was later called the AI winter.

A revival of AI came in the 1980s with LISP, a programming language, and LISP machines, computers optimized to run LISP code, which was the default language for AI research at that time. A couple of companies produced these computers and sold them commercially with some initial success, but eventually they were overtaken by personal computers as we know them today. Once again, results fell short of what had been promised in research proposals, and the second AI winter began.

The start of the new era of artificial intelligence is usually dated to 2009-2012. The community of AI researchers had been growing steadily since the 1990s and early 2000s, with larger grants and some interest from corporations, but the most significant catalyst for the current AI revolution came in 2009 with the creation of ImageNet, a large visual database designed for testing image recognition algorithms. Then, on 30 September 2012, a convolutional neural network called AlexNet achieved a top-5 error of 15.3% in the ImageNet 2012 Challenge, more than 10.8 percentage points lower than that of the runner-up, beating classical algorithms. This was made feasible by the use of Graphics Processing Units (GPUs) during training, an essential ingredient of the deep learning revolution that was about to start. Suddenly people started to pay attention, not just within the AI community but across the technology industry as a whole, and the current revolution began.

The ideas we use today in AI research, such as neural networks, were to some extent already known 30 or 40 years ago. What was missing was enough data and enough computing power to process it. Machine learning could not take off without access to large datasets, and thanks to the digital revolution of the early 2000s, many Internet companies emerged, older companies became digitized, and there was suddenly more data than any human could process. At the same time, in line with Moore's law, the power of computers kept doubling roughly every 18 months, eventually reaching the level needed to process big data as it was created, or at least enough of it to make neural networks work. Once that was established, everyone returned to AI research: governments, corporations, and scientists.