Artificial intelligence is a major advancement in computing. Beyond taking inputs from humans and processing that data according to fixed programming, AI software is capable of making complex decisions from data sets with or without human intervention.
Ultra-flat wafers continue to play a central role in AI chips, just as they do in personal computers.
Artificial intelligence programs require hardware that is built, like conventional computers, on silicon wafers. Surprisingly, though, AI workloads typically run on graphics processing units (GPUs) rather than the central processing units (CPUs) that computers use to run their operating systems. This is because GPUs have capabilities that particularly benefit AI.
Most people know GPUs as the driving force behind beautiful imagery in video games. Graphics processing is a resource-intensive task, so GPUs are designed to process work in parallel rather than in series as CPUs do. Only parallel processing makes it possible to render many visual elements simultaneously.
Artificial intelligence also processes numerous pieces of data simultaneously, especially during training. Since both graphics rendering and AI rely on parallel computation, it makes sense that AI systems run on graphics cards rather than conventional processors.
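The serial-versus-parallel distinction can be illustrated with a minimal sketch. The function name `scale_pixel` and the tiny data set below are purely illustrative; real GPU workloads apply the same idea to millions of pixels or tensor elements at once.

```python
from concurrent.futures import ThreadPoolExecutor

def scale_pixel(value, factor=2):
    # A toy per-element operation, standing in for one shader or tensor op
    return value * factor

pixels = list(range(8))

# Serial: one element at a time, like a single CPU core stepping through a loop
serial = [scale_pixel(p) for p in pixels]

# Parallel: many elements dispatched at once, the GPU-style pattern
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(scale_pixel, pixels))

# Both strategies produce the same result; only the execution model differs
assert serial == parallel
```

The key point is that each element's computation is independent, so the work can be spread across many execution units without changing the outcome.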
Moore’s Law refers to a conjecture by Intel co-founder Gordon Moore that the number of transistors in computer processors doubles roughly every two years. As a result, modern CPUs contain billions of transistors crammed into a chip the size of one’s palm.
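The doubling cadence compounds quickly, which a short arithmetic sketch makes concrete. The starting count and time span below are illustrative, not figures from any particular chip.

```python
def transistor_count(start, years, doubling_period_years=2):
    # Moore's law as arithmetic: the count doubles once per doubling period
    return start * 2 ** (years // doubling_period_years)

# Illustrative only: a chip starting at 1 million transistors,
# doubling every two years over 20 years (ten doublings)
projected = transistor_count(1_000_000, 20)
print(projected)  # 1,024,000,000 -- past a billion in two decades
```

Ten doublings multiply the count by 1,024, which is how million-transistor chips of the 1990s became today's billion-transistor processors.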
Experts now think the law may be reaching its limits. Designers are running out of ways to shrink lithographic features enough to fit twice as many transistors into each new generation of processors.
Reflecting this slowdown, NVIDIA CEO Jensen Huang said in 2017 that processor performance now grows by only about 10% with each new release. That same year, Huang also said he sees GPUs taking on more applications in specialized fields, including AI.