Always Providing You With Ongoing Information

Posts tagged ‘Artificial intelligence’

Fighting Opioid Addiction With Artificial Intelligence


Courtesy of WVU Today WVU’s Fanny Ye has been awarded a grant that will support her research to develop AI techniques to combat the opioid epidemic.

Yanfang (Fanny) Ye, assistant professor of computer science and electrical engineering at West Virginia University, has been awarded a grant from the National Institute of Justice in support of her work to develop novel artificial intelligence techniques to combat the opioid epidemic and trafficking. The award comes with about $1 million in funding over a three-year period.


“As of today, we still lack deep insight into the online ecosystem of opioid trafficking,” said Ye. “In addition to offline data, utilizing AI technologies to obtain knowledge and recognize patterns from online data across the darknet and surface net could provide valuable investigative leads, which might greatly facilitate law enforcement’s ability to prevent, respond to and disrupt opioid trafficking networks.”


As part of the grant, Ye, in collaboration with Xin Li, professor of computer science and electrical engineering, will design and develop new AI technologies to automate the analysis of large-scale surface net and darknet data to provide timely investigative leads to law enforcement agencies in the United States to combat opioid trafficking.


Dr. Ye and Dr. Li will use sophisticated pattern-recognition research that could have a significant impact on disrupting the supply chain underlying opioid trafficking.

Ye has extensive research and development experience in Internet security solutions. Before joining WVU, she was the principal scientist at Comodo Security Solutions, Inc., a provider of computer software and SSL digital certificates, and deputy director at Kingsoft Internet Security Corporation, the second biggest Internet security company in China. Ye proposed and developed cloud-based solutions for mining big data in the area of Internet security, especially for malware detection and adversarial machine learning. The algorithms and systems she developed have been incorporated into popular commercial products, including Comodo Internet Security and Kingsoft Antivirus, which protect millions of users worldwide.


She was also recently named winner of the prestigious AICS 2019 Challenge Problem, and has received the ACM SIGKDD 2017 Best Paper and Best Student Paper awards (Applied Data Science Track), the IEEE EISIC 2017 Best Paper Award and the 2017 New Researcher of the Year Award from the Statler College. Ye has brought in nearly $2.5 million in research funding to WVU in the past two years.



Vatican & Microsoft Team Up To Promote Ethics In Artificial Intelligence


The Vatican is teaming up with Microsoft on an academic prize to promote ethics in artificial intelligence.

Pope Francis met privately on Wednesday with Microsoft President Brad Smith and the head of a Vatican scientific office that promotes Catholic Church positions on human life.

The Vatican said Smith and Archbishop Vincenzo Paglia of the Pontifical Academy for Life told Francis about the international prize for an individual who has successfully defended a dissertation on ethical issues involving artificial intelligence.

The winner will receive 6,000 euros ($6,900) and an invitation to Microsoft’s Seattle headquarters. The theme of the Pontifical Academy for Life’s 2020 plenary assembly is AI.

Artificial Intelligence Used To Detect Bullying




Researchers at McGill University in Montreal, Canada, are training algorithms to detect hate speech by teaching them how specific communities on Reddit target women, black people and those who are overweight using specific words. Abusive speech is notoriously difficult to detect because people use offensive language for all sorts of reasons, and some of the nastiest comments do not use offensive words.

Their findings suggest that individual hate-speech filters are needed for separate targets of hate speech.
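The idea of separate filters per target can be sketched with a toy example. The target names and word lists below are hypothetical placeholders, not the McGill team's actual lexicons or models:

```python
# Toy sketch of per-target hate-speech filters. Each target of abuse gets
# its own lexicon of characteristic insults; the word lists here are
# invented placeholders, not real research data.
TARGET_LEXICONS = {
    "appearance": {"ugly", "hideous"},
    "weight": {"whale", "lardball"},
}

def flag_targets(comment, lexicons=TARGET_LEXICONS):
    """Return the targets whose characteristic vocabulary appears in the
    comment (case-insensitive, whole-word matching only)."""
    words = set(comment.lower().split())
    return sorted(t for t, lexicon in lexicons.items() if words & lexicon)
```

A real system would learn each community's vocabulary from data rather than hard-coding it, which is precisely why a single generic filter tends to miss target-specific abuse.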

The exercise in detecting online bullying is far from merely academic. Take social media giants like Instagram. One survey in 2017 found that 42% of teenage users have experienced bullying on Instagram, the highest rate of all social media sites assessed in the study. In some extreme cases, distressed users have killed themselves. And it isn’t just teenagers who are being targeted – Queen guitarist Brian May is among those to have been attacked on Instagram.

Instagram is now using AI-powered text and image recognition to detect bullying in photos, videos and captions. While the company has been using a “bullying filter” to hide toxic comments since 2017, it recently began using machine learning to detect attacks on users’ appearance or character, in split-screen photographs, for example. It also looks for threats against individuals that appear in photographs and captions.

Bullying exists offline in many forms. Recent revelations of sexual harassment within major technology firms in Silicon Valley have shone a light on how bullying and discrimination can impact people in the workplace. Almost half of women have experienced some form of discrimination while working in the European tech industry.

Spot is an intelligent chatbot that aims to help victims report their accounts of workplace harassment accurately and securely. It produces a time-stamped interview that a user can keep for themselves or submit to their employer, anonymously if necessary. The idea is to “turn a memory into evidence”, says Julia Shaw, a psychologist at University College London and co-creator of Spot. Another tool named Botler AI goes one step further by providing advice to people who have been sexually harassed. Trained on more than 300,000 US and Canadian court case documents, it uses natural language processing to assess whether a user has been a victim of sexual harassment in the eyes of the law, and generates an incident report, which a user can hand over to human resources or the police. The first version was live for six months and achieved 89% accuracy.

“The algorithms developed in this study can fairly accurately address the question of who will attempt suicide, but not when someone will die,” say the researchers.



New Technology Being Developed To Detect Gun Violence Before It Begins



New security technologies are being developed every day to prevent further gun violence. Athena, a new security technology, uses artificial intelligence to detect a firearm before it is used.

The system touts up to 99 percent accuracy in identifying guns. It can spot these weapons, or people making threatening motions, and alert someone who could stop them from entering a building and causing harm.

The system connects directly to the security cameras already in place at a business or school campus, bypassing any heavy or costly installation. If a threat is detected, the system instantly relays information and can directly alert the police.

The technology will also send real-time footage of an incident to law enforcement agencies, allowing them to know about the current situation before they arrive on scene.

While Athena promises that its algorithms will keep prices down, the main holdup could be that the system has a hard time distinguishing between a real and a fake gun, leading to unnecessary alarms.

Tech Trends In Healthcare This Year


Advances in healthcare technology are said to offer extraordinary benefits for medical practitioners and patients. Healthcare technology is no threat to medical practitioners, but these tools should be used with great wisdom.

In most hospitals, health records have traditionally been handwritten on paper and stored in vast piles of folders containing patients' essential medical and personal information. Years have gone by, and the problems with this type of record keeping have never been solved. Thanks to developers of new technology, there will finally be an alternative to handwritten records (which are sometimes too hard to decipher). Electronic Health Records, or EHRs, will replace paper records and make life easier for everyone.

Another area where technology will soon take over some tasks is the use of artificial intelligence for appointment scheduling, health-status monitoring and notification of medical assistance. Artificial intelligence, or AI, is already widely used in radiology and dermatology.

While AI is slowly being introduced in healthcare, the IoMT, or Internet of Medical Things, is also being adopted by medical practitioners. IoMT is a set of medical devices and application software that helps detect and monitor patients' issues before they become critical.

Moreover, future medical technology will also make use of the devices and gadgets people already own. There will soon be mobile healthcare applications capable of helping patients manage their medical conditions. One example is a mobile ultrasound app already in use by some doctors.

These trends in healthcare technology are just a few of the things set to be introduced to the public, and reviews so far have been more positive than negative.

China & Artificial Intelligence


China’s leadership – including President Xi Jinping – believes that being at the forefront in AI technology is critical to the future of global military and economic power competition.

AI has become a new focus of international competition. AI is a strategic technology that will lead in the future; the world’s major developed countries are taking the development of AI as a major strategy to enhance national competitiveness and protect national security. China’s AI policy community is paying close attention to the AI industries and policies of other countries, particularly the United States.

Chinese officials have expressed concerns that AI “will lower the threshold of military action,” because states may be more willing to attack each other with AI military systems due to the lack of casualty risk. Chinese officials also expressed concern that increased use of AI systems would make misconceptions and unintentional conflict escalation more likely due to the lack of well-defined norms regarding the use of such systems. Additionally, Chinese officials displayed substantive knowledge of the cybersecurity risks associated with AI systems, as well as their implications for Chinese and international security.

China has already established two major new research organizations focused on AI and unmanned systems under the National University of Defense Technology. AI is viewed as a promising military “leapfrog development” for China, meaning that it offers military advantages over the US and will be easier to implement in China than in the United States. China now sees AI as “a race of two giants” between itself and the United States.

China is advancing the state of the art in AI research, and its companies are successfully developing genuinely innovative and market-competitive products and services around AI applications. SenseTime, for example, is indisputably one of the world leaders in computer vision AI and claims to have achieved annual revenue growth of 400 percent for three consecutive years.

Although China has strengths in AI research and development and commercial applications, its leadership perceives major weaknesses relative to the United States in top talent, technical standards, software platforms and semiconductors.

China’s strengths lie mainly in AI applications; it remains weak in core AI technologies such as hardware and algorithm development. China’s AI development lacks top-tier talent and has a significant gap with developed countries, especially the U.S., in this regard.



Artificial Intelligence Tool Used To Catch People Lying To The Police


British scientists have created a new computer program that can spot if someone has lied to police about being robbed.

The groundbreaking software analyses the wording of a victim’s statement in order to identify telltale signs of fake reports.

Spanish police, who have been using the tool, found it was successful in more than 80 per cent of cases, helping them identify 64 false reports in just one week.

Developed by experts at Cardiff University, VeriPol uses a combination of automatic text analysis and artificial intelligence to recognize when somebody has been lying or exaggerating to the police.

Thousands of false reports are submitted to the police each year with many perpetrators hoping to receive inflated insurance payouts or claims for crimes that never happened in the first place.

Using algorithms, the machine is able to carefully analyze various features in the text, such as adjectives, acronyms, verbs, nouns, punctuation marks and numbers.

Experts claim a false statement is more likely to contain certain traits and giveaway signs, that can be spotted using artificial intelligence.

It is believed that false statements are more likely to be shorter than genuine ones and focus on the details of the stolen property rather than the incident itself.
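A rough sketch of the kind of surface-feature extraction described above, using illustrative features (word count, punctuation, digits) rather than VeriPol's actual feature set:

```python
import string

def surface_features(statement):
    """Extract simple surface cues from a statement's text. The features
    here (length, punctuation count, digit count) are illustrative; the
    real system reportedly also counts parts of speech such as adjectives
    and verbs, which requires a linguistic tagger."""
    return {
        "n_words": len(statement.split()),
        "n_punct": sum(ch in string.punctuation for ch in statement),
        "n_digits": sum(ch.isdigit() for ch in statement),
    }

# Feature vectors like these would then be fed to a classifier trained on
# past statements that investigators had already labelled true or false.
feats = surface_features("My phone, worth 800 euros, was stolen.")
```

For instance, a statement that is unusually short yet dense with property details and figures would score differently from a longer account focused on the incident itself.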
