Always Providing You With Ongoing Information


AI Robots Soon Will Determine If Your Marriage Is Failing


Imperial College Business School and dating website eharmony have investigated how AI will change dating in the next few years.

It found in-home listening devices such as Alexa and Google Home could predict with 75% accuracy the likelihood of a couple’s relationship ending.

By 2021, the popular AI assistants could analyze the acoustics of conversations between couples. When an argument breaks out, they could even intervene with suggestions for a resolution.

The Future of Dating Report also believes DNA analysis could be routinely used to match people with their perfect partner. The report further explored how technology will better aid success in online dating: most dating services use AI and machine learning to filter possible matches by analysing self-reported feelings and behavioral data captured by wearable tech.

Nowadays, algorithms can predict compatibility, enhance dating experiences and help manufacture chemistry.

Artificial Intelligence & Jail Time


ProPublica found that algorithms tend to reinforce racial bias in law enforcement data. Algorithmic risk assessments falsely flag black defendants as future criminals at almost twice the rate of white defendants. What is more, the judges who relied on these risk assessments typically did not understand how the scores were computed.

This is problematic, because machine learning models are only as reliable as the data they’re trained on. If the underlying data is biased in any form, there is a risk that structural inequalities and unfair biases are not just replicated, but also amplified. So, AI engineers must be especially wary of their blind spots and implicit assumptions; it is not just the choice of machine learning techniques that matters, but also all the small decisions about finding, organizing and labeling training data for AI models.

In order to guard against unfair bias, all subjects should have an equal chance of being represented in the data. Sometimes this means that underrepresented populations need to be thoughtfully added to any training datasets.
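As a minimal sketch of one such remedy, the following hypothetical example (the function name and toy data are illustrative, not from any study cited here) shows naive random oversampling, which duplicates rows from under-represented groups until every group appears as often as the largest one:

```python
import random
from collections import Counter

def oversample(rows, group_key, seed=0):
    """Randomly duplicate rows from minority groups until every
    group is as common as the largest one (naive rebalancing)."""
    rng = random.Random(seed)
    counts = Counter(group_key(r) for r in rows)
    target = max(counts.values())
    balanced = list(rows)
    for group, n in counts.items():
        members = [r for r in rows if group_key(r) == group]
        # Add (target - n) randomly chosen duplicates for this group.
        balanced.extend(rng.choice(members) for _ in range(target - n))
    return balanced

# Toy dataset: group "B" is heavily under-represented.
data = [{"group": "A", "y": 1}] * 90 + [{"group": "B", "y": 0}] * 10
balanced = oversample(data, lambda r: r["group"])
print(Counter(r["group"] for r in balanced))  # both groups now appear 90 times
```

Duplicating rows is the crudest form of rebalancing; in practice, thoughtfully collecting genuinely new data from under-represented populations is preferable to copying what little exists.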

MIT Study Reveals That Artificial Intelligence Is Racist & Sexist

An MIT study has revealed that the way artificial intelligence systems collect data often makes them racist and sexist.

Researchers looked at a host of systems and found that many of them exhibited serious bias.

The team then developed a system to help researchers make sure their systems are less biased.

‘Computer scientists are often quick to say that the way to make these systems less biased is to simply design better algorithms,’ said lead author Irene Chen, a PhD student who wrote the paper with MIT professor David Sontag and postdoctoral associate Fredrik D. Johansson.

‘But algorithms are only as good as the data they’re using, and our research shows that you can often make a bigger difference with better data.’

In one example, the team looked at an income-prediction system and found that it was twice as likely to misclassify female employees as low-income and male employees as high-income. 

They found that increasing the dataset by a factor of 10 would make those mistakes happen 40 percent less often.

In another dataset, the researchers found that a system’s ability to predict intensive care unit (ICU) mortality was less accurate for Asian patients. 

However, the researchers warned that existing approaches for reducing discrimination would make the non-Asian predictions less accurate.

Chen says that one of the biggest misconceptions is that more data is always better. Instead of simply gathering more data overall, researchers should collect more data from under-represented groups.

‘We view this as a toolbox for helping machine learning engineers figure out what questions to ask of their data in order to diagnose why their systems may be making unfair predictions,’ says Sontag.

The team will present the paper in December at the annual conference on Neural Information Processing Systems (NIPS) in Montreal. 

Warning: AI Is Watching When You’re Tempted To Divert Expenses To A Work Dinner

One employee traveling for work checked his pooch into a kennel and billed it to his boss as a hotel expense. Another charged yoga classes to the corporate credit card as client entertainment. A third, after racking up a small fortune at a strip club, submitted the expense as a steakhouse business dinner.

These bogus expenses, which occurred recently at major U.S. companies, have one thing in common: all were exposed by artificial intelligence algorithms that can, in a matter of seconds, sniff out fraudulent claims and forged receipts that human auditors often cannot detect without hours of tedious labor.
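As a toy illustration of the kind of statistical screen such systems build on (a hypothetical sketch, not AppZen’s actual method, which combines many signals), the following flags expense amounts that sit far above the norm for their category:

```python
from statistics import mean, stdev

def flag_outliers(expenses, threshold=3.0):
    """Flag (category, amount) pairs more than `threshold` standard
    deviations above their category's mean amount."""
    by_cat = {}
    for cat, amount in expenses:
        by_cat.setdefault(cat, []).append(amount)
    flagged = []
    for cat, amount in expenses:
        vals = by_cat[cat]
        if len(vals) < 3:
            continue  # too few samples to estimate a norm
        mu, sd = mean(vals), stdev(vals)
        if sd and (amount - mu) / sd > threshold:
            flagged.append((cat, amount))
    return flagged

# Twenty ordinary work dinners and one suspiciously large "dinner".
expenses = [("dinner", 60)] * 20 + [("dinner", 3000)]
print(flag_outliers(expenses))  # → [('dinner', 3000)]
```

A screen this simple produces false positives on legitimately large expenses, which is why the article’s auditors still review what the algorithms surface rather than acting on the flags automatically.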

AppZen, an 18-month-old AI accounting startup, has already signed up several big companies, including International Business Machines Corp. and Comcast Corp., and claims to have saved its clients $40 million in fraudulent expenses. AppZen and traditional firms like Oversight Systems say their technology isn’t ousting jobs, at least so far, but rather freeing up auditors to dig deeper into dubious claims and educate employees about travel and expense policies.

A report released in April by the Association of Certified Fraud Examiners analyzed 2,700 fraud cases from January 2016 to October 2017 that resulted in losses of $7 billion.

The world’s largest anti-fraud organization found that travel and expense embezzlement typically accounts for about 14 percent of employee fraud. It has become easier to fool finance departments thanks to websites that make it easy to create a bogus paper trail.

The algorithms have already exposed some creative—and costly—frauds: employees tacking on bottles of vodka to their “work lunch” bill, buying $3,000 worth of Starbucks gift cards and claiming it as “coffee with a contact.” One employee expensed her $900 office farewell party and submitted a claim that contained an animated photograph of her face instead of any receipts—demonstrating how seriously she took the auditors.

Guido van Drunen, a principal in KPMG’s Forensic Advisory Services, believes some lower-level jobs will disappear as more and more companies adopt the technology in the coming years. But he says there’s no way AI can spot all the sneaky ways employees try to defraud their employers.

Info obtained from Accounting Today

China Debuts The World’s First AI News Anchor


China’s Xinhua News Agency debuted an artificial intelligence (AI) news anchor on Wednesday, as the state-run media organization seeks to bring a “brand new” news experience to the world.

A report posted on YouTube by New China TV features a life-like, English-speaking “AI anchor” modeled after one of Xinhua’s actual presenters named Qiu Hao.

Explaining he is programmed to read texts typed into his system, the digital presenter said he would deliver the news without interruption.

“I will work tirelessly to keep you informed as texts will be typed into my system uninterrupted,” it said.

According to the South China Morning Post, Xinhua said its new AI anchors have officially become members of the Xinhua News Agency reporting team and will work with other anchors to bring authoritative, timely and accurate news information in both Chinese and English.

It was also hinted that AI anchors may one day “challenge” their human counterparts because of their ability to work 24 hours a day provided human editors keep inputting text into the system.


AI & Your Job


Many Companies Rushing To Adopt AI But Stumbling


  • Companies that are rushing to embrace artificial intelligence technologies are running into big problems with their data.
  • Some companies don’t have enough data, others have it in disparate places, and still others don’t have it in a usable format.
  • Because of such challenges, some early adopters have abandoned AI projects.

AI generally requires lots of data. But it needs to be the right kind of data, in very particular kinds of formats. And it often needs it to be “clean,” including only the kind of information it needs and none of what it doesn’t.
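A minimal sketch of what “clean” means in practice (the field names and records below are hypothetical, chosen only for illustration): drop incomplete rows rather than guess at missing values, and coerce every field to a consistent type before it reaches a model.

```python
def clean(records, required=("customer_id", "amount", "date")):
    """Keep only records with every required field non-empty,
    coercing 'amount' to a float so downstream types are consistent."""
    cleaned = []
    for r in records:
        if any(not r.get(k) for k in required):
            continue  # drop incomplete rows rather than guess values
        row = dict(r)
        # Normalize formatting quirks like thousands separators.
        row["amount"] = float(str(row["amount"]).replace(",", ""))
        cleaned.append(row)
    return cleaned

raw = [
    {"customer_id": "c1", "amount": "1,200.50", "date": "2018-04-01"},
    {"customer_id": "c2", "amount": "", "date": "2018-04-02"},  # missing amount
    {"customer_id": "c3", "amount": "88", "date": "2018-04-03"},
]
print(clean(raw))  # keeps c1 and c3; c2 is dropped
```

Real pipelines also deduplicate records and reconcile data scattered across disparate systems, which is where the survey respondents below report the most pain.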

Data-related issues are collectively the biggest challenge companies face with AI, said Paul Daugherty, chief technology and innovation officer of consulting firm Accenture. “The biggest challenge most organizations face when they start thinking about AI is their data,” he said in an interview earlier this month. “Often we’re seeing that that’s the big area that companies need to invest in.”

In a recent survey by consulting firm Deloitte, a plurality of executives at companies that are early adopters of AI ranked “data issues” as the biggest challenge they faced in rolling out the technology. Some 16% said it was the toughest problem they confronted with AI, and 39% said it ranked in their top three.

Some companies don’t have the data they need. Others have databases or data stores that aren’t in good shape to be tapped by AI. Still others are dealing with issues related to trying to keep their data secure or maintain users’ privacy as they prepare for it to be used by AI systems.
