Microsoft Research is developing technology that may end up in the next version of Microsoft’s classroom software. In a recent publication, Microsoft Research describes an AI-driven system that could help teachers automatically assess students’ reading performance, saving them time and allowing more individual attention for the students who need it most. Their research paper, “Automatic Evaluation of Children Reading Aloud on Sentences and Pseudo words,” automatically predicts the overall reading-aloud ability of primary school children (6-10 years old), based on their reading of sentences and pseudowords.
A silicon wafer designed to sort particles found in bodily fluids for the purpose of early disease detection.
IBM’s research labs are already working on a chip that can diagnose a potentially fatal condition faster than the best lab in the country, a camera that can see so deeply into a pill it can tell if its molecular structure has more in common with a real or counterfeit tablet, and a system that can help identify if a patient has a mental illness just from the words they use.
More work has to be done before these systems are ready for commercial rollout. The next few years could also see IBM using artificial intelligence and new analytical techniques to produce a ‘lab on a chip’ — a pocket-sized device that would be able to analyse a single drop of blood or other bodily fluid to find evidence of bacteria, viruses, or elements like proteins that could be indicative of an illness.
Perhaps its greatest use, however, could be allowing people to know about health conditions before any symptoms begin to show.
While analyzing the contents of a drop of blood at a nanoscale level will need huge AI processing power, the real challenge for IBM in bringing labs on a chip to market is in the silicon.

Mental health, however, is one area where artificial intelligence will chew up vast quantities of data and turn it into useful information for clinicians. Over the next two years, IBM will be creating a prototype of a machine learning system that can help mental health professionals diagnose patients just from the content of their speech.
Speech is already one of the key components that doctors and psychiatrists will use to detect the onset of mental illness, checking for signs including the rate, volume, and choice of words. Now, IBM is hoping that artificial intelligence can do the same, by analyzing what a patient says or writes — from their consultations with a doctor or the content of their Twitter feeds.
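IBM has not published the details of its system, but the kind of word-choice signals described above can be illustrated with a minimal sketch. The feature names, the transcript, and the use of first-person pronoun rate as a signal are illustrative assumptions, not IBM’s actual method:

```python
# Illustrative sketch only: IBM's actual system is not public. This extracts
# a few simple lexical features of the kind clinicians listen for (speech
# rate, vocabulary, word choice), assuming a plain-text transcript as input.
import re
from collections import Counter

def lexical_features(transcript, duration_seconds):
    """Compute simple speech features from a transcript and its duration."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(words)
    n = len(words)
    return {
        # proxy for speech rate
        "words_per_minute": n / (duration_seconds / 60) if duration_seconds else 0.0,
        # vocabulary diversity (type-token ratio)
        "type_token_ratio": len(counts) / n if n else 0.0,
        # first-person pronoun rate -- an assumed word-choice signal
        "first_person_rate": sum(counts[w] for w in ("i", "me", "my")) / n if n else 0.0,
    }

feats = lexical_features("I think I am tired and I cannot sleep", 10)
```

In a real pipeline, features like these would be fed into a trained classifier alongside many other signals; the sketch only shows the feature-extraction step.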
IBM already has form with such tools: one of the first commercial uses of Watson, Big Blue’s cognitive computing system, was as a doctor’s assistant for cancer care. Now the company is working with hospitals and other partners to build prototypes for other cognitive tools in healthcare. IBM hopes using machine learning will make the process faster and give an additional layer of insight.
Research into artificial intelligence and deep-learning neural networks is now expanding into multimedia manipulation, a field that may have deep implications for news and social media in coming years. AI researchers have been successfully creating 3D face models from still 2D images, generating sound effects from silent video, and even mapping the facial expressions of actors onto other people in videos. So far, these technologies have only been used to make amusing YouTube videos, but in the future they could be used to generate completely believable fake news.
One example is Smile Vector, a Twitter bot that can make any still photo of a person’s face smile. The app uses a neural network powered by deep learning to automatically morph the expressions on a still photo.
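The bot’s name hints at the underlying technique: in a generative model’s latent space, subtracting the average code of neutral faces from the average code of smiling faces yields a “smile” direction that can be added to any face’s code. A toy sketch of that arithmetic, with made-up 4-dimensional latent codes standing in for a trained model’s encodings:

```python
# Toy sketch of latent-vector arithmetic (the technique the name "Smile
# Vector" refers to). Real systems encode images with a trained generative
# model, which is not shown here; the latent codes below are illustrative.

def direction(positive_codes, negative_codes):
    """mean(smiling codes) - mean(neutral codes) gives a 'smile' direction."""
    def mean(codes):
        return [sum(c[i] for c in codes) / len(codes) for i in range(len(codes[0]))]
    p, n = mean(positive_codes), mean(negative_codes)
    return [a - b for a, b in zip(p, n)]

def apply_direction(code, d, strength=1.0):
    """Morph a face by stepping its latent code along the direction."""
    return [c + strength * di for c, di in zip(code, d)]

smiling = [[1.0, 0.2, 0.0, 0.5], [0.8, 0.4, 0.2, 0.7]]
neutral = [[0.2, 0.2, 0.0, 0.5], [0.0, 0.4, 0.2, 0.7]]
smile_dir = direction(smiling, neutral)
morphed = apply_direction([0.1, 0.3, 0.3, 0.1], smile_dir, strength=0.5)
```

Varying `strength` produces the gradual morph seen in the bot’s animations; the decoded image at each step is what gets stitched into the final clip.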
It is believed that this new technology will be useful in the creative industries, allowing anyone from furniture designers to video game developers to explore new ideas. The creators of a program called Face2Face have released a demo video showing the facial expressions of an actor being realistically projected onto the faces of Trump, Putin, and Obama. The new Adobe VoCo software allows a user to rewrite any recorded speech, or to make any celebrity or world leader appear to say something that they never actually said.
The Uber app will require drivers in several US cities to periodically upload selfies before they can begin their shifts. Uber will then use Microsoft’s Cognitive Services face-recognition AI to compare the uploaded images with the selfie the company already has on file for the driver.
If the two don’t match, Uber will temporarily suspend the driver’s account while it investigates. The check is designed to prevent fraud.
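The compare-against-threshold logic behind such a check can be sketched as follows. The real system calls Microsoft’s cloud API; here the faces are represented by hypothetical embedding vectors, and the threshold value is made up for illustration:

```python
# Hedged sketch: Uber's real check calls Microsoft's face-recognition API.
# Here, faces are stand-in embedding vectors compared by cosine similarity
# against an illustrative threshold -- not Uber's or Microsoft's actual values.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

MATCH_THRESHOLD = 0.8  # illustrative value only

def verify_driver(selfie_embedding, on_file_embedding):
    """True if the uploaded selfie matches the photo on file; a False result
    would trigger the temporary suspension described above."""
    return cosine_similarity(selfie_embedding, on_file_embedding) >= MATCH_THRESHOLD

same = verify_driver([0.6, 0.8, 0.0], [0.6, 0.8, 0.1])   # near-identical faces
diff = verify_driver([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])   # unrelated faces
```

The design choice to suspend rather than reject outright reflects that face matching is probabilistic: a borderline similarity score warrants human review, not an automatic verdict.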