The list of occupations threatened by artificial intelligence and automation keeps growing, with drivers, translators and shop assistants all under threat from the rise of the robots. Now you can add lawyers to the list.
A contest that took place last month pitted more than 100 lawyers from some of London’s ritziest firms against an artificial intelligence program called Case Cruncher Alpha.
Both the humans and the AI were given the basic facts of hundreds of PPI (payment protection insurance) claims and asked to predict whether the Financial Ombudsman would allow each claim.
In all, they submitted 775 predictions, and the computer won hands down: Case Cruncher achieved an accuracy rate of 86.6%, compared with 66.3% for the lawyers.
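The scoring in a contest like this is simple: each entrant predicts whether the Ombudsman would allow a claim, and accuracy is the share of predictions that match the real rulings. A minimal sketch, using invented data rather than anything from the actual contest:

```python
# Sketch of how contest accuracy figures are computed.
# All predictions and outcomes below are illustrative, not real data.

def accuracy(predictions, outcomes):
    """Fraction of predictions that match the true outcomes."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

outcomes    = [True, False, True, True, False]   # Ombudsman's actual rulings
ai_preds    = [True, False, True, False, False]  # hypothetical AI predictions
human_preds = [True, True, False, False, False]  # hypothetical lawyer predictions

print(f"AI:      {accuracy(ai_preds, outcomes):.1%}")   # 4 of 5 correct
print(f"Lawyers: {accuracy(human_preds, outcomes):.1%}") # 2 of 5 correct
```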
Case Cruncher is not the product of a tech giant but the brainchild of four Cambridge law students. They started out with a simple chatbot that answered legal questions – a bit of a gimmick but it caught on.
Two judges oversaw the competition: Cambridge law lecturer Felix Steffek and Ian Dodd from Premonition, a company that runs one of the world’s biggest databases of legal cases. Dodd says the youthful Case Cruncher team chose the subject for the contest well.
Ian Dodd thinks AI may replace some of the grunt work done by junior lawyers and paralegals but no machine can talk to a client or argue in front of a High Court judge. He puts it simply: “The knowledge jobs will go, the wisdom jobs will stay.”
Ed Barton and his UK-based startup Curiscope have created a blend of virtual reality (VR) and augmented reality (AR). Using an anatomy VR app and the company’s Virtuali-Tee t-shirt, they let people see inside their own chests. The technology works via a highly stylized QR code printed on the front of the t-shirt: scan it with the companion app and you can explore the chest cavity, including the heart and lungs.
Instead of entering a hotel search and receiving a page with hundreds of options, new data-driven travel agents—using humans, AI or both—are tailoring options based on a traveler’s personal preferences. These new agents use chatbots or messaging to communicate with travel bookers. Elaine Glusac, writing in The New York Times, offers these examples of data-driven travel planners.
Pana caters to frequent travelers. For a monthly fee, Pana is available 24 hours a day. It uses member profiles and past trips to funnel travel requests to human agents.
Mezi uses chatbots to handle travel booking. If a complicated issue arises then humans get involved; afterward they train the bots to handle it in the future. The more you book with Mezi, the more it learns about your preferences.
Savanti Travel helps frequent travelers cut costs while gaining status with travel companies. It doesn’t operate on commission, to avoid the temptation to push more expensive bookings.
Hello Hipmunk is a travel-planning messaging system. It runs through Facebook Messenger, Skype or Slack, and lets you hop between topics as if you were talking to a human. It can offer tips, such as the cheapest times to travel.
Flightfox specializes in complicated itineraries. The service books flights only; for a fee, agents find the best prices and send you links so you can do the booking yourself. It also uses points systems to find the best deals.
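The pattern Mezi describes above — bot first, human fallback, then retraining — can be sketched in a few lines. This is an illustrative toy, not Mezi’s actual system; the lookup-table “bot” stands in for a real conversational model:

```python
# "Bot first, human fallback" sketch: the bot answers requests it
# recognizes and escalates everything else to a human agent; resolved
# escalations are added to the bot's knowledge for next time.

class TravelBot:
    def __init__(self):
        # Requests the bot already knows how to handle (illustrative).
        self.known = {"book flight": "Searching flights..."}

    def handle(self, request, human_agent):
        if request in self.known:
            return self.known[request]
        # Escalate to a human, then learn the answer for the future.
        answer = human_agent(request)
        self.known[request] = answer
        return answer

def human(request):
    return f"Human agent resolving: {request}"

bot = TravelBot()
print(bot.handle("book flight", human))        # bot answers directly
print(bot.handle("multi-city refund", human))  # escalated to a human
print(bot.handle("multi-city refund", human))  # now handled by the bot
```

In a real system the retraining step would update a model rather than a dictionary, but the control flow — answer, escalate, learn — is the same.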
By 2019, 3D printing is expected to be a crucial tool in up to 35 per cent of surgeries.
In 2021, artificial intelligence (AI) is due to assist doctors in treating patients.
AI ‘chatbots’ are expected to outperform humans at some surgical procedures in 2030.
And by 2035, we will be able to upgrade our senses with implants that detect X-rays.
In the future, patients will still need specialists with expert knowledge, but advanced AI systems will assist healthcare practitioners by providing clinical and medical solutions, sometimes eliminating the need to see a doctor at all.
Microsoft Research is developing technology which may end up in the next version of Microsoft’s classroom software. In a recent publication, Microsoft Research describes an AI-driven system which could help teachers automatically assess reading performance, saving them time and allowing more individual attention for the students who need it most. Their research paper, “Automatic Evaluation of Children Reading Aloud on Sentences and Pseudo words,” automatically predicts the overall reading-aloud ability of primary school children (6-10 years old), based on the reading of sentences and pseudo words.
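To make the idea concrete, here is a toy sketch of this kind of automatic reading assessment — not Microsoft’s actual model, just an illustration of scoring a read-aloud attempt from simple features such as speed and word-level accuracy (the weighting is invented):

```python
# Toy read-aloud scorer: combines word accuracy and reading speed.
# The 80/20 weighting and speed cap are illustrative assumptions.

def reading_score(spoken_words, target_words, seconds):
    """Combine accuracy and speed into a rough 0-100 score."""
    correct = sum(s == t for s, t in zip(spoken_words, target_words))
    word_accuracy = correct / len(target_words)
    words_per_min = len(spoken_words) / (seconds / 60)
    # Weight accuracy heavily; cap the speed contribution at 20 points.
    return round(80 * word_accuracy + min(words_per_min / 5, 20), 1)

target = "the quick brown fox jumps over the lazy dog".split()
spoken = "the quick brown fox jump over the lazy dog".split()  # one error
print(reading_score(spoken, target, seconds=6))
```

A real system would also have to transcribe the child’s speech first and handle insertions, deletions and hesitations, which is where the machine learning does the hard work.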
Just imagine a world where anyone could create a photo-realistic video and make whoever they want say whatever they want. Add to that the ability to write a script and have a machine recite it back with the perfectly indistinguishable intonation of the person featured. Well, it’s here!
A Montreal-based AI startup has recently revealed a new voice imitation technology that could signal the end of trusting your ears, meaning pretty soon there could be a cloud of doubt over literally every “recording” you see and hear.
Three PhD students at the University of Montreal developed Lyrebird, a deep learning algorithm that reportedly needs only a 60-second sample of a person’s voice to be able to generate a synthesized copy. While the company touts applications such as speech synthesis for people with disabilities, it’s clear this technology is opening a Pandora’s box of future complications.
Lyrebird has a dedicated “Ethics” page on its website, openly discussing the potentially dangerous consequences of the technology. The company intends to release the technology publicly and make it available to anyone, the idea being that demonstrating so visibly how voices can be artificially faked will teach us all to be skeptical of the audio recordings we hear in the future.
Adobe revealed a similar project, called VoCo, in late 2016.
Pictured: a silicon wafer designed to sort particles found in bodily fluids for the purpose of early disease detection.
IBM’s research labs are already working on a chip that can diagnose a potentially fatal condition faster than the best lab in the country, a camera that can see so deeply into a pill it can tell if its molecular structure has more in common with a real or counterfeit tablet, and a system that can help identify if a patient has a mental illness just from the words they use.
More work has to be done before the systems are ready to roll out commercially. The next few years could also see IBM using artificial intelligence and new analytical techniques to produce a ‘lab on a chip’ — a pocket-sized device that would be able to analyse a single drop of blood or other bodily fluid to find evidence of bacteria, viruses, or elements like proteins that could be indicative of an illness.
Perhaps its greatest use, however, could be allowing people to know about health conditions before any symptoms begin to show.
While analyzing the contents of a drop of blood at a nanoscale level will need huge AI processing power, the real challenge for IBM in bringing labs on a chip to market is in the silicon. Mental health, however, is one area where artificial intelligence will chew up vast quantities of data and turn it into useful information for clinicians. Over the next two years, IBM will be creating a prototype of a machine learning system that can help mental health professionals diagnose patients just from the content of their speech.
Speech is already one of the key signals doctors and psychiatrists use to detect the onset of mental illness, checking for signs including the rate, volume, and choice of words. Now IBM is hoping that artificial intelligence can do the same, by analyzing what a patient says or writes — whether from consultations with a doctor or the content of their Twitter feed.
IBM already has form with such tools: one of the first commercial uses of Watson, Big Blue’s cognitive computing system, was as a doctor’s assistant for cancer care. Now the company is working with hospitals and other partners to build prototypes for other cognitive tools in healthcare. IBM hopes using machine learning will make the process faster and give an additional layer of insight.