Instead of entering a hotel search and receiving a page with hundreds of options, new data-driven travel agents—using humans, AI or both—are tailoring options based on a traveler’s personal preferences. These new agents use chatbots or messaging to communicate with travel bookers. Elaine Glusac, writing at The New York Times, offers these examples of data-driven travel planners.
Pana caters to frequent travelers. For a monthly fee, Pana is available 24 hours a day. It uses member profiles and past trips to funnel travel requests to human agents.
Mezi uses chatbots to handle travel booking. If a complicated issue arises, humans step in; afterward, they train the bots to handle it in the future. The more you book with Mezi, the more it learns about your preferences.
Savanti Travel helps frequent travelers cut costs while gaining status with travel companies. It doesn’t operate on commission, avoiding any incentive to recommend more expensive bookings.
Hello Hipmunk is a travel-planning messaging system. It runs through Facebook Messenger, Skype or Slack, and lets you topic hop as if you were talking to a human. It can offer tips, such as the cheapest times to travel.
Flightfox specializes in complicated itineraries. The service books flights only; for a fee, agents find the best prices and send you links so you can do the booking yourself. It also uses points systems to find the best deals.
By 2019, 3D printing is expected to be a crucial tool in up to 35 per cent of surgeries.
In 2021, artificial intelligence (AI) is due to assist doctors in treating patients.
AI ‘chatbots’ are expected to outperform humans at some surgical procedures in 2030.
And in 2035, our senses could be upgraded with implants that detect X-rays.
In the future, patients will still need specialists with expert knowledge, but the difference is that advanced AI systems will assist healthcare practitioners by providing clinical and medical solutions, sometimes eliminating the need to see a doctor at all.
Microsoft Research is developing technology that may end up in the next version of Microsoft’s classroom software. In a recent publication, Microsoft Research describes an AI-driven system that could help teachers automatically assess students’ reading performance, saving teachers time and allowing more individual attention for the students who need it most. The research paper, “Automatic Evaluation of Children Reading Aloud on Sentences and Pseudo words,” describes a system that automatically predicts the overall reading-aloud ability of primary school children (6-10 years old), based on their reading of sentences and pseudo words.
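The paper’s actual features and model aren’t given here, but the idea of turning per-child reading measurements into a single ability score can be sketched. The measures below (sentence accuracy, pseudo-word accuracy, reading speed) and their weights are illustrative assumptions, not Microsoft’s method; a real system would learn them from annotated recordings.

```python
def reading_ability(sentence_accuracy, pseudoword_accuracy,
                    words_per_minute, target_wpm=90.0):
    """Combine three hypothetical per-child measures into a 0-1 ability score.

    sentence_accuracy, pseudoword_accuracy: fractions in [0, 1]
    words_per_minute: reading speed; full credit at target_wpm and above
    """
    speed = min(words_per_minute / target_wpm, 1.0)
    # Weights are illustrative only; a trained model would fit these
    # against human ratings of the same recordings.
    return round(0.4 * sentence_accuracy
                 + 0.3 * pseudoword_accuracy
                 + 0.3 * speed, 3)

# A fluent reader should score above a struggling one.
print(reading_ability(0.98, 0.90, 110))  # fluent
print(reading_ability(0.60, 0.40, 35))   # struggling
```

The point is only the shape of the pipeline: raw speech is reduced to a few measurable features, which are then mapped to an overall score a teacher can act on.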
Just imagine a world where anyone could create a photo-realistic video and make whoever they want say whatever they want. Add to that the ability to write a script and have a machine recite it back with intonation indistinguishable from that of the person featured. Well, it’s here.
A Montreal-based AI startup has recently revealed a new voice imitation technology that could signal the end of trusting your ears, meaning pretty soon there could be a cloud of doubt over literally every “recording” you see and hear.
Three PhD students at the University of Montreal developed Lyrebird, a deep learning algorithm that reportedly needs only a 60-second sample of a person’s voice to be able to generate a synthesized copy. While the company touts applications such as speech synthesis for people with disabilities, it’s clear this technology is opening a Pandora’s box of future complications.
Lyrebird has a dedicated “Ethics” page on its website, openly discussing the potentially dangerous consequences of the technology. The company intends to release the technology publicly and make it available to anyone, the idea being that by demonstrating so visibly how voices can be artificially faked, everyone will learn to become skeptical of the audio recordings they hear.
Adobe revealed a project called VoCo in late 2016.
A silicon wafer designed to sort particles found in bodily fluids for the purpose of early disease detection.
IBM’s research labs are already working on a chip that can diagnose a potentially fatal condition faster than the best lab in the country, a camera that can see so deeply into a pill it can tell if its molecular structure has more in common with a real or counterfeit tablet, and a system that can help identify if a patient has a mental illness just from the words they use.
More work has to be done before the systems are ready for commercial rollout. The next few years could also see IBM using artificial intelligence and new analytical techniques to produce a ‘lab on a chip’ — a pocket-sized device that would be able to analyse a single drop of blood or other bodily fluid to find evidence of bacteria, viruses, or elements like proteins that could be indicative of an illness.
Perhaps its greatest use, however, could be allowing people to know about health conditions before any symptoms begin to show.
While analyzing the contents of a drop of blood at a nanoscale level will need huge AI processing power, the real challenge for IBM in bringing labs on a chip to market is in the silicon. Mental health, however, is one area where artificial intelligence will chew up vast quantities of data and turn it into useful information for clinicians. Over the next two years, IBM will be creating a prototype of a machine learning system that can help mental health professionals diagnose patients just from the content of their speech.
Speech is already one of the key signals that doctors and psychiatrists use to detect the onset of mental illness, checking for signs including the rate, volume, and choice of words. Now, IBM is hoping that artificial intelligence can do the same by analyzing what a patient says or writes, whether from consultations with a doctor or the content of their Twitter feeds.
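The signals mentioned above (rate and choice of words) lend themselves to a simple sketch. The features and the first-person word list below are illustrative assumptions, not IBM’s method; a clinician-support tool would extract far richer features and feed them to a trained model rather than report them raw.

```python
import re

def speech_features(transcript, duration_seconds):
    """Extract toy linguistic features from a transcript.

    Returns speaking rate, vocabulary diversity, and a crude
    self-focus proxy (share of first-person pronouns).
    """
    words = re.findall(r"[a-z']+", transcript.lower())
    n = len(words)
    first_person = sum(w in {"i", "me", "my", "mine", "myself"}
                       for w in words)
    return {
        "words_per_minute": 60.0 * n / duration_seconds,
        "type_token_ratio": len(set(words)) / n,  # vocabulary diversity
        "first_person_rate": first_person / n,    # self-focus proxy
    }

feats = speech_features("I feel like I can't focus and I am tired", 10)
print(feats)
```

In practice these numbers would be one input among many; no single feature is diagnostic on its own.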
IBM already has form with such tools: one of the first commercial uses of Watson, Big Blue’s cognitive computing system, was as a doctor’s assistant for cancer care. Now the company is working with hospitals and other partners to build prototypes for other cognitive tools in healthcare. IBM hopes using machine learning will make the process faster and give an additional layer of insight.
Over the last year, though, Zuckerberg has spent between 100 and 150 hours on his home project: coding Jarvis, a personal assistant voiced by Morgan Freeman. Zuckerberg and his wife Priscilla Chan use a custom iPhone app or a Facebook Messenger bot to turn lights on and off, play music based on personal tastes, open the front gate for friends, make toast, and even wake up their one-year-old daughter Max with Mandarin lessons.
In addition to its fleet of supercars, the Dubai Police are now enlisting the help of Crime Prediction software (Credit: Abdullah AlBargan via Flickr (CC BY-ND 2.0))
The Dubai Police department not only has luxury cars; it also has artificial intelligence crime prediction software.
The Dubai Police is the latest to have AI backup, in the form of Space Imaging Middle East’s (SIME) new Crime Prediction software.
SIME’s software is said to work like others already in use: machine learning algorithms are fed existing data and intelligence from police databases to identify patterns that human crime analysts might miss. The system can then use those patterns to determine where crime may be likely to occur next, and inform a police force where to best deploy officers and other resources. The idea isn’t just to go there and arrest suspicious-looking people, but to use a larger police presence in an area to deter crime from happening in the first place.
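SIME hasn’t published its model, but the pattern described above can be sketched in miniature: bin past incidents into grid cells, weight recent incidents more heavily, and flag the highest-scoring cells for extra patrols. The grid size, decay rate, and data below are all illustrative assumptions; real systems such as PredPol or HunchLab use far richer models and many more data sources.

```python
from collections import defaultdict

def hotspot_cells(incidents, cell_size=1.0, decay=0.9, top_k=2):
    """Rank grid cells by recency-weighted incident counts.

    incidents: list of (x, y, days_ago) tuples
    Returns the top_k cells as (col, row) indices.
    """
    scores = defaultdict(float)
    for x, y, days_ago in incidents:
        cell = (int(x // cell_size), int(y // cell_size))
        scores[cell] += decay ** days_ago  # recent incidents count more
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

history = [
    (0.2, 0.3, 1), (0.8, 0.1, 2), (0.5, 0.9, 3),  # cluster in cell (0, 0)
    (5.1, 5.2, 1),                                 # lone incident elsewhere
    (0.4, 0.6, 0),
]
print(hotspot_cells(history))
```

Even this toy version shows why such systems raise concerns: the output simply mirrors where incidents were recorded before, so biased historical data produces biased patrol suggestions.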
Does It Work? Police departments in various US cities have been using systems like Predpol, HunchLab and Series Finder for years, with mixed results and uneasy moral implications. After using HunchLab for 12 months, the St Louis County Police department expects a drop in this year’s crime statistics, but the results are hard to measure as a direct effect of predictive policing.
SIME hasn’t given many details on exactly how its system works or whether it’s built to overcome some of these issues, but others, like HunchLab, are actively trying to be transparent about their inner workings.