Microsoft’s Project Adam: a deep-learning ‘brain’ that can see, hear, and read

Fahad Al-Riyami

The mad scientists over at Microsoft Research have been hard at work creating a computer system that they hope will evolve into “true artificial intelligence.” While a computer that can emulate the human brain is still a long way off today, Microsoft believes that a computer that can recognize speech, interpret images, and read documents is a good foundation to build on.

Microsoft’s Project Adam is a deep-learning system trained on a massive data set to better recognize speech and classify images. For example, Project Adam can look at a picture of a dog and identify its breed, and even that breed’s sub-breeds, in addition to warning you if the bug you just took a picture of is poisonous.

The system is built with the help of technology from the University of Wisconsin. It mimics the human brain in a sense: it has a number of processors (think neurons) that work independently of each other but write to the same memory location. With nothing preventing them from overwriting each other’s data, it might sound like a giant mess, but it works in certain situations, as the sketch below illustrates.
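As a minimal sketch of that lock-free, asynchronous idea (popularized as “Hogwild!” by University of Wisconsin researchers), the Python snippet below has several threads update one shared parameter vector with no locking at all. The toy linear-regression task, the `worker` function, and all names are illustrative assumptions, not Project Adam’s actual code:

```python
import threading
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data: y = X @ true_w + noise.
n_samples, n_features = 1000, 20
X = rng.standard_normal((n_samples, n_features))
true_w = rng.standard_normal(n_features)
y = X @ true_w + 0.01 * rng.standard_normal(n_samples)

# One shared parameter vector; every worker reads and writes it directly.
w = np.zeros(n_features)

def worker(seed: int, steps: int = 5000, lr: float = 0.01) -> None:
    local_rng = np.random.default_rng(seed)
    for _ in range(steps):
        i = local_rng.integers(n_samples)   # pick one training example
        grad = (X[i] @ w - y[i]) * X[i]     # its stochastic gradient
        # Unsynchronized in-place update: collisions with other workers
        # are simply tolerated rather than locked against.
        np.subtract(w, lr * grad, out=w)

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("distance from true weights:", np.linalg.norm(w - true_w))
```

Individual updates do occasionally clobber one another, but because each step is small, training still converges in practice; that tolerance for messy writes is what lets the workers skip expensive synchronization.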

Google, on the other hand, is using a similar artificial brain, one that already handles some of the computation behind Google services. Microsoft claims, however, that Project Adam outshines the competition in efficiency, requiring 30 times fewer machines while achieving twice the accuracy of competing systems.

According to Microsoft Research head Peter Lee, there are no plans to release a Project Adam app just yet, but the system holds a lot of potential in e-commerce, robotics, and sentiment analysis.