Always Providing You With Ongoing Information

Posts tagged ‘Artificial intelligence’

Voice Imitation & Fake Videos?

 


Just imagine a world where anyone could create a photo-realistic video and make whoever they want say whatever they want. Add to that the ability to write a script and have a machine recite it back with the perfectly indistinguishable intonation of the person featured. Well, it’s here!

A Montreal-based AI startup has recently revealed a new voice imitation technology that could signal the end of trusting your ears, meaning pretty soon there could be a cloud of doubt over literally every “recording” you see and hear.

Three PhD students at the University of Montreal developed Lyrebird, a deep learning algorithm that reportedly needs only a 60-second sample of a person’s voice to be able to generate a synthesized copy. While the company touts applications such as speech synthesis for people with disabilities, it’s clear this technology is opening a Pandora’s box of future complications.

Lyrebird has a dedicated “Ethics” page on its website, openly discussing the potentially dangerous consequences of the technology. The company intends to release the technology publicly and make it available to anyone, the idea being that by demonstrating so visibly how easily voices can be artificially faked, everyone will learn to become skeptical of the audio recordings they hear in the future.

Adobe revealed a similar project, called VoCo, in late 2016.

 

 

 

IBM Has A Five-Year Plan to Remake Healthcare


A silicon wafer designed to sort particles found in bodily fluids for the purpose of early disease detection.

IBM’s research labs are already working on a chip that can diagnose a potentially fatal condition faster than the best lab in the country, a camera that can see so deeply into a pill it can tell if its molecular structure has more in common with a real or counterfeit tablet, and a system that can help identify if a patient has a mental illness just from the words they use.

More work has to be done before the systems are ready to be rolled out commercially. The next few years could also see IBM using artificial intelligence and new analytical techniques to produce a ‘lab on a chip’: a pocket-sized device that would be able to analyse a single drop of blood or other bodily fluid to find evidence of bacteria, viruses, or elements like proteins that could be indicative of an illness.

Digital manufacturing and 3D-printing technologies can put sensors in custom-designed probes, which would effectively do the analysis and tell medical professionals what they’re looking for. Rather than having to wait days or weeks after a blood test for a virus to be cultured enough to be identified, these tiny labs on a chip could pick up the smallest traces of the organism.

Perhaps its greatest use, however, could be allowing people to know about health conditions before any symptoms begin to show.

While analyzing the contents of a drop of blood at the nanoscale will need huge AI processing power, the real challenge for IBM in bringing labs on a chip to market is in the silicon.

Mental health is another area where artificial intelligence will chew up vast quantities of data and turn it into useful information for clinicians. Over the next two years, IBM will be creating a prototype of a machine learning system that can help mental health professionals diagnose patients just from the content of their speech.

Speech is already one of the key signals that doctors and psychiatrists use to detect the onset of mental illness, checking for signs including the rate, volume, and choice of words. Now, IBM is hoping that artificial intelligence can do the same by analyzing what a patient says or writes, whether from consultations with a doctor or the content of their Twitter feeds.
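
IBM hasn’t published the details of its prototype, but as a rough, hedged illustration of the general idea of screening on word choice, the sketch below pulls simple text features out of transcripts and feeds them to an off-the-shelf classifier. The transcripts and labels are invented placeholders, not real patient data.

```python
# Illustrative sketch only -- not IBM's system. It shows how simple
# linguistic features (word choice) from transcripts could feed a
# standard classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder data: transcripts of patient speech and clinician labels
# (0 = no concern, 1 = follow-up recommended). Real data would come
# from consultations and would require consent and careful handling.
transcripts = [
    "I have been sleeping well and enjoying work lately",
    "everything feels pointless and I cannot concentrate on anything",
]
labels = [0, 1]

# TF-IDF captures word choice; a linear model keeps the result
# interpretable for clinicians.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(transcripts, labels)

print(model.predict(["I feel fine and my appetite is back"]))
```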

IBM already has form with such tools: one of the first commercial uses of Watson, Big Blue’s cognitive computing system, was as a doctor’s assistant for cancer care. Now the company is working with hospitals and other partners to build prototypes for other cognitive tools in healthcare. IBM hopes using machine learning will make the process faster and give an additional layer of insight.

 

Mark Zuckerberg’s Robot Jarvis

Over the last year, Zuckerberg has spent between 100 and 150 hours on his home project: coding a personal assistant called Jarvis, voiced by Morgan Freeman. Zuckerberg and his wife Priscilla Chan use a custom iPhone app or a Facebook Messenger bot to turn lights on and off, play music based on personal tastes, open the front gate for friends, make toast, and even wake up their one-year-old daughter Max with Mandarin lessons.
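
None of Zuckerberg’s code is included here, but as a hedged sketch of the general shape, a text command arriving from a chat bot could be routed to home actions with something as simple as keyword matching. The `HomeController` class and its methods below are invented placeholders, not Jarvis.

```python
# Hypothetical sketch of routing chat-bot text commands to home actions.
# HomeController and its methods are invented placeholders, not Jarvis.
class HomeController:
    def lights(self, on: bool) -> str:
        return f"Lights turned {'on' if on else 'off'}"

    def play_music(self, taste: str) -> str:
        return f"Playing {taste} playlist"

    def open_gate(self) -> str:
        return "Front gate opened"

def handle_message(text: str, home: HomeController) -> str:
    """Very naive keyword routing; a real assistant would use NLP."""
    text = text.lower()
    if "light" in text:
        return home.lights(on="on" in text)
    if "music" in text:
        return home.play_music(taste="jazz")
    if "gate" in text:
        return home.open_gate()
    return "Sorry, I did not understand that."

print(handle_message("Turn the lights on", HomeController()))
```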


Artificial Intelligence & Dubai Police Dept

 


In addition to its fleet of supercars, the Dubai Police are now enlisting the help of Crime Prediction software (Credit: Abdullah AlBargan via Flickr, CC BY-ND 2.0)

The Dubai Police department doesn’t just have luxury cars; it also has artificial intelligence crime prediction software.

The Dubai Police is the latest to have AI backup, in the form of Space Imaging Middle East’s (SIME) new Crime Prediction software.

SIME’s software is said to work like others already in use: machine learning algorithms are fed existing data and intelligence from police databases to identify patterns that human crime analysts might miss. The system can then use those patterns to determine where crime may be likely to occur next, and inform a police force where to best deploy officers and other resources. The idea isn’t just to go there and arrest suspicious-looking people, but to use a larger police presence in an area to deter crime from happening in the first place.
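
As a generic, hedged illustration of that pattern (not SIME’s product, whose internals aren’t public), the sketch below grids a city into cells, builds simple per-cell features from past incident counts, and scores each cell with a classifier so patrols can be concentrated where predicted risk is highest. All data and feature choices here are invented.

```python
# Generic illustration of hotspot-style prediction, not SIME's software.
# Each city grid cell gets features from recent incident history; a
# classifier scores cells so a force can prioritise patrols.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Invented training data: for each cell, [incidents last week,
# incidents last month, 1 if nightlife district else 0].
X = rng.integers(0, 10, size=(200, 3))
# Invented labels: 1 if at least one incident occurred the next week.
y = (X[:, 0] + rng.normal(0, 2, 200) > 5).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Score a few cells and rank them for patrol allocation.
cells = np.array([[8, 9, 1], [1, 2, 0], [5, 4, 1]])
risk = model.predict_proba(cells)[:, 1]
for cell, r in sorted(zip(cells.tolist(), risk), key=lambda t: -t[1]):
    print(f"cell features {cell}: predicted risk {r:.2f}")
```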

Does It Work? Police departments in various US cities have been using systems like PredPol, HunchLab, and Series Finder for years, with mixed results and uneasy ethical implications. After using HunchLab for 12 months, the St Louis County Police Department expects a drop in this year’s crime statistics, but it is hard to attribute the results directly to predictive policing.

SIME hasn’t given many details on exactly how its system works or whether it’s built to overcome some of these issues, but others like HunchLab are actively trying to be transparent about their inner workings.

The White House Report On Artificial Intelligence & America’s Employees


The White House released a new report this week entitled Artificial Intelligence, Automation, and the Economy, as part of an admirable but very flawed initiative to understand the impact of the new technology on American employees.

The White House said, “Accelerating AI capabilities will enable automation of some tasks that have long required human labor”. The report says some low wage jobs will become obsolete. Research consistently finds that the jobs that are threatened by automation are highly concentrated among lower-paid, lower-skilled, and less-educated workers. This means that automation will continue to put downward pressure on demand for this group, putting downward pressure on wages and upward pressure on inequality.  Robots are taking orders and making food; customers are growing accustomed to the lack of human interaction.

These transformations will open up new opportunities for individuals, the economy, and society, but they also have the potential to disrupt the current livelihoods of millions of Americans. Whether AI leads to unemployment and increases in inequality over the long run depends not only on the technology itself but also on the institutions and policies that are in place.

The advent of computers and the Internet raised the relative productivity of higher-skilled workers. Routine-intensive occupations that focused on predictable, easily programmable tasks, such as switchboard operators, filing clerks, travel agents, and assembly-line workers, were particularly vulnerable to replacement by new technologies. Some occupations were virtually eliminated and demand for others reduced. Research suggests that technological innovation over this period increased the productivity of those engaged in abstract thinking, creative tasks, and problem-solving, and was therefore at least partially responsible for the substantial growth in jobs employing such traits. Shifting demand towards more skilled labor raised the relative pay of this group, contributing to rising inequality.

Because AI is not a single technology but rather a collection of technologies applied to specific tasks, its effects will be felt unevenly through the economy. Some tasks will be more easily automated than others, and some jobs will be affected more than others, both negatively and positively. Some jobs may be automated away, while for others, AI-driven automation will make many workers more productive and increase demand for certain skills. Finally, new jobs are likely to be directly created in areas such as the development and supervision of AI, as well as indirectly created in a range of areas throughout the economy as higher incomes lead to expanded demand.

Recent research suggests that the effects of AI on the labor market in the near term will continue the trend that computerization and communication innovations have driven in recent decades. Researchers’ estimates of the share of jobs threatened over the next decade or two range from 9 to 47 percent.

The report suggests three broad strategies for addressing the impacts of AI-driven automation across the whole U.S. economy:

  1. Invest in and develop AI for its many benefits;
  2. Educate and train Americans for jobs of the future; and
  3. Aid workers in the transition and empower workers to ensure broadly shared growth.
 

 

Artificial Intelligence & Fake News


Research into artificial intelligence and deep-learning neural networks is now expanding into multimedia manipulation, a field that may have deep implications for news and social media in coming years. AI researchers have been successfully creating 3D face models from still 2D images, generating sound effects from silent video, and even mapping the facial expressions of actors onto other people in videos. So far, these technologies have only been used to make amusing YouTube videos, but in the future they could be used to generate completely believable fake news.

Smile Vector is a Twitter bot that can make any still photo of a person’s face smile. The bot uses a neural network powered by deep learning to automatically morph the expression in a still photo.
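
The bot’s own code isn’t included in the post, but the general technique its name hints at, nudging an image’s representation along a learned “smile” direction in a generative model’s latent space, can be sketched as follows. The encoder/decoder objects here are stand-ins for a pretrained model, not Smile Vector’s actual implementation.

```python
# Hedged sketch of the latent-direction idea behind tools like Smile
# Vector: encode a face, move its latent code along a "smile" axis,
# and decode. The model objects below are hypothetical stand-ins.
import numpy as np

class PretrainedFaceAutoencoder:
    """Placeholder for a real pretrained generative face model."""
    def encode(self, image: np.ndarray) -> np.ndarray:
        return np.zeros(128)            # latent code (dummy)

    def decode(self, latent: np.ndarray) -> np.ndarray:
        return np.zeros((256, 256, 3))  # image (dummy)

def add_smile(image, model, smile_direction, strength=1.0):
    """Move the face's latent code along the learned smile direction."""
    z = model.encode(image)
    return model.decode(z + strength * smile_direction)

# The smile direction is typically the mean latent of smiling faces
# minus the mean latent of neutral faces, learned from labelled data.
model = PretrainedFaceAutoencoder()
smile_direction = np.random.default_rng(0).normal(size=128)
frames = [add_smile(np.zeros((256, 256, 3)), model, smile_direction, s)
          for s in np.linspace(0.0, 1.0, 10)]  # animate neutral -> smiling
```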

It is believed that this new technology will be useful in the creative industries, allowing anyone from furniture designers to video game developers to try out new ideas. The creators of a program called Face2Face have released a demo video showing the facial expressions of an actor being realistically projected onto the faces of Trump, Putin, and Obama. The new Adobe VoCo software allows a user to rewrite any recorded speech, meaning the technology could be used to make any celebrity or world leader appear to say something that they did not say in reality.

 

Facebook Creating Artificial Intelligence To Flag Offensive Live Videos


Facebook is working on automatically flagging offensive material in live video streams, using artificial intelligence to monitor content.

The social media company has been embroiled in a number of content controversies this year, from facing international outcry after removing an iconic Vietnam War photo due to nudity, to allowing the spread of fake news on its site.

Facebook has historically relied mostly on users to report offensive posts. The company has been increasingly using artificial intelligence, with “an algorithm that detects nudity, violence, or any of the things that are not according to their policies.”
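
Facebook hasn’t described its system beyond that quote, but a hedged sketch of the general shape, sample frames from a live stream, score each with an image classifier, and flag the stream when a score crosses a threshold, might look like this. The `score_frame` classifier and the threshold are placeholders, not Facebook’s system.

```python
# Hedged sketch of flagging a live stream: sample frames periodically,
# score each with an image classifier, and alert when the score for a
# policy-violating class crosses a threshold. score_frame is a
# placeholder for a real trained model, not Facebook's system.
from typing import Callable, Iterable
import numpy as np

def monitor_stream(frames: Iterable[np.ndarray],
                   score_frame: Callable[[np.ndarray], float],
                   threshold: float = 0.9,
                   every_nth: int = 30) -> bool:
    """Return True (flag for human review) if any sampled frame scores
    above the threshold for disallowed content."""
    for i, frame in enumerate(frames):
        if i % every_nth:          # sample roughly one frame per second
            continue
        if score_frame(frame) >= threshold:
            return True
    return False

# Example with dummy frames and a dummy classifier.
dummy_frames = (np.zeros((224, 224, 3)) for _ in range(300))
flagged = monitor_stream(dummy_frames, score_frame=lambda f: 0.01)
print("flag for review:", flagged)
```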
