This new artificial intelligence, introduced by Microsoft, seems perfectly suited to helping the visually impaired. According to Microsoft, the technology can describe the content of images as precisely as a human would.
An AI that can describe images with great precision
Google offered similar technology back in 2016, but Microsoft says its teams have gone even further. According to the company, its researchers have developed an artificial intelligence system that is “more precise than humans” at this task, i.e. a model that can describe pictures with unsettling precision. It is a particularly interesting technology for the visually impaired or blind. Microsoft already offers the technology as part of its Azure services, which means any developer can incorporate it into their applications.
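As a rough sketch of what that integration could look like, the snippet below builds a call to the “describe image” operation of the Azure Computer Vision REST API. The endpoint host, API key, image URL, and API version string are placeholders, not values from the article; check your own Azure resource for the real ones.

```python
import json
import urllib.request

def build_describe_request(endpoint, key, image_url, max_candidates=1):
    """Build an HTTP request for the Computer Vision 'describe' operation.

    The endpoint and key are placeholders for your own Azure resource;
    the API version ("v3.1" here) may differ on your deployment.
    """
    url = f"{endpoint}/vision/v3.1/describe?maxCandidates={max_candidates}"
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": key,   # your resource key
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_describe_request(
        "https://my-resource.cognitiveservices.azure.com",  # placeholder
        "YOUR_API_KEY",                                     # placeholder
        "https://example.com/photo.jpg",                    # placeholder
    )
    # Actually sending it requires a real endpoint and key:
    # with urllib.request.urlopen(req) as resp:
    #     captions = json.load(resp)["description"]["captions"]
    #     print(captions[0]["text"])
    print(req.full_url)
```

The JSON response contains a ranked list of caption candidates with confidence scores, which an application can surface directly, for example as alt text.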
The technology is also available in Seeing AI, the Microsoft app for the blind and visually impaired, which has been offered in five different languages for a few months. This artificial intelligence helps blind people “see better,” promoting social inclusion around the world. As Microsoft explains, labeling images is one of the most difficult problems for AI to solve, and this new model tackles it remarkably well.
Specific keyword learning
Eric Boyd, vice president of Azure AI, explains: “This not only requires understanding the objects in a scene, but also how they interact, and describing them … Our artificial intelligence makes images easier to find than searching for them in a search engine. And for visually impaired users, web browsing and software use can be vastly improved.”
Xuedong Huang, CTO at Azure AI, insisted that this technology be quickly integrated into the Azure platform so that users could benefit from it right away. The model was trained on a set of images tagged with specific keywords, which gives it capabilities that most other artificial intelligence models lack. Similar models are usually trained on whole images paired with full captions, which makes it harder for them to learn how the individual objects in a picture relate to the scene around them.
Xuedong Huang explains: “This pre-training on a visual vocabulary is a necessary step to educate and train the system. We try to teach our artificial intelligence to learn by itself.” This is clearly why the new model has an edge over other solutions on the market: it is now able to label pictures it has never seen before. The real test of the Microsoft model will be how it performs in the real world.
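The two-stage idea described above can be illustrated with a deliberately tiny toy, which is in no way Microsoft's actual model: stage one builds a “visual vocabulary” by associating keyword tags with the feature vectors they were seen with, and stage two reuses that vocabulary to describe a scene, even a combination of objects never seen together during training. All names, features, and tags here are invented for illustration.

```python
# Toy sketch of "visual vocabulary" pre-training (illustrative only).

def train_visual_vocabulary(tagged_regions):
    """Stage 1: average the feature vectors observed for each keyword tag."""
    sums, counts = {}, {}
    for features, tag in tagged_regions:
        acc = sums.setdefault(tag, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[tag] = counts.get(tag, 0) + 1
    return {tag: [v / counts[tag] for v in vec] for tag, vec in sums.items()}

def nearest_tag(vocab, features):
    """Label a region with the closest keyword in the vocabulary."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(vocab, key=lambda tag: dist(vocab[tag], features))

def describe(vocab, regions):
    """Stage 2: compose a caption from per-region tags -- even for a
    combination of objects the caption stage never saw together."""
    tags = [nearest_tag(vocab, r) for r in regions]
    return "a photo containing " + " and ".join(tags)

vocab = train_visual_vocabulary([
    ([1.0, 0.0], "dog"), ([0.9, 0.1], "dog"),
    ([0.0, 1.0], "frisbee"), ([0.1, 0.9], "frisbee"),
])
print(describe(vocab, [[0.95, 0.05], [0.05, 0.95]]))
# -> a photo containing dog and frisbee
```

The point of the toy is the separation of concerns: object-level tags are learned first from keyword supervision, so the captioning step only has to compose known words, which is why such a model can plausibly label pictures it has never seen before.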