The demand for AI continues to increase, according to forecasts by International Data Corporation: enterprise AI adoption is estimated to surge 16% in 2020 compared with previous years.
Diversity is also enabling the growth of AI: as companies rely on AI for decision-making, bias incidents are declining, according to the IDC report.
The AI-driven customer experience is growing as enterprises analyze interactions and respond to queries in real time.
Automated AI systems now offer customer support, an area where humans face challenges because of physical limitations. AI-driven customization to consumer needs is transforming customer service as companies respond to market changes.
Patterns and Anomalies
Patterns and anomalies is one of the most widely adopted patterns of AI. Machine learning is particularly good at digesting large amounts of data quickly and at identifying patterns or spotting anomalies and outliers in that data. Pattern-matching has broad applicability, and it recurs across AI applications for good reason.
The patterns-and-anomalies pattern uses machine learning and other cognitive approaches to learn patterns in data and discover higher-order connections within it. Computers excel at recognizing patterns, and data is at the heart of AI.
AI systems can quickly spot patterns in behavior, actions, inputs, or other data, paying attention to far more information at a time than a human can.
Machine learning learns by determining the patterns inherent in data: rather than writing a program that tells a computer exactly what to do, it lets a system learn over time through examples and data.
The use of AI to detect fraud¹ is one widely implemented example of pattern or anomaly identification. Predictive typing on a computer or smartphone is also powered by AI pattern recognition. Systems built on the patterns-and-anomalies pattern act on their findings in a variety of ways.
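Fraud-style anomaly detection can be sketched in a few lines. The sketch below is illustrative only: the transaction amounts are synthetic and a real system would learn from many features, but the core idea (learn what "normal" looks like, flag what deviates) is the same:

```python
# Minimal anomaly-detection sketch: learn what "normal" looks like from
# past data, then flag new values that deviate too far (a z-score test).
# All amounts below are synthetic; real fraud systems use many features.
from statistics import mean, pstdev

history = [48.0, 52.5, 49.9, 51.2, 47.8, 50.3, 53.1, 49.0, 50.7, 51.9]
mu = mean(history)
sigma = pstdev(history)

def is_anomaly(amount, threshold=3.0):
    """Flag a transaction more than `threshold` standard deviations
    from the historical mean."""
    return abs(amount - mu) / sigma > threshold

incoming = [49.5, 500.0, 750.0]
flagged = [x for x in incoming if is_anomaly(x)]
print(flagged)  # the two unusually large transactions are flagged
```

A production system would use a learned model rather than a single statistic, but the normal-versus-outlier logic carries over.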
Can we bring common sense into AI?
“Can we ever get machines to actually understand what they read? That’s a very hard thing.” - David Ferrucci, Elemental Cognition¹⁰
David Ferrucci built a computer that mastered Jeopardy! Since then, he has been attacking a more challenging task: the creator of Watson wants to teach AI common sense.
“Does it make sense that Fernando put his plant in the window because he wants it to be healthy? The window is sunny, and the plant needs light to be healthy.” The question appears on a screen in front of Ferrucci; it is part of his effort to build an artificial intelligence system that learns how the world works.
By teaching machines² to acquire and apply everyday knowledge that lets humans communicate, reason, and navigate our surroundings, Ferrucci and his company, Elemental Cognition, hope to fix a huge blind spot in modern AI.
“Common sense is essential for advancing everything from language understanding to robotics. It is central to most of what we want to do with AI,” according to Ernest Davis, a professor at NYU.
The advances in AI built on a mix of machine learning and big data have given us gadgets that respond to spoken commands and self-driving cars that recognize objects. AI was transformed after Watson’s triumph.
AI Golden Age of Semiconductor Innovation
The semiconductor gave Silicon Valley its name. It is the foundational technology of the digital age, sitting at the heart of the computing revolution that has transformed every facet of society over the past half-century.
The pace of improvement in computing capabilities has been breathtaking and relentless. Computer chips³ today are many millions of times more powerful than the world’s first microprocessor introduced by Intel in 1971.
Innovation in silicon has entailed further miniaturizing transistors in order to squeeze more of them onto each chip. Intel and AMD have thrived for decades by reliably improving CPU capabilities, a process Clayton Christensen would identify as “sustaining innovation.”
AI introduces a new golden age of semiconductor innovation. The seemingly limitless opportunities of machine learning, along with its growing computational demands, have spurred entrepreneurs to revisit and rethink even the most fundamental tenets of chip architecture.
A new type of chip, built for AI, will power the next generation of computing. In the early 2010s, the AI community began to realize that Nvidia’s gaming chips were in fact well suited to the types of workloads that machine learning algorithms demand. In the past 24 months, five AI chip unicorns have emerged, and the race is on to power the upcoming era of AI.
The potential of quantum computing is massive, and it will change the future of AI. Applications include everything from cryptography and optimization to machine learning.
IonQ, a quantum computing startup, has described quantum computing as a marathon, not a sprint. Strong AI is the idea that a machine could one day understand or learn any intellectual task that a human can.
“AI in the Strong AI sense, that I have more of an opinion, just because I have more experience in that personally,” IonQ¹¹ CEO Peter Chapman told VentureBeat. “And there was a really interesting paper that just recently came out talking about how to use a quantum computer to infer the meaning of words in NLP⁴. And I think that those kinds of things for Strong AI look quite promising.”
D-Wave¹², one of IonQ’s competitors, argues that quantum computing and machine learning are extremely well matched.
Three improvements in ML that quantum computing will likely allow, according to Chapman:
- The level of optimization achieved with a QC will be much higher than with today’s classical computers.
- Because a QC can work on a problem in parallel, training time might be substantially reduced.
- The number of permutations explored will likely be much larger because of the speed improvements of QC.
AI Economist: Policy Evaluation
Economic research has wrestled with designing the best tax policy for decades, but it remains an open problem. Scientists at Salesforce think AI can help.
The team led by Richard Socher has developed a system called the AI Economist that uses reinforcement learning — the same sort of technique behind DeepMind’s AlphaGo and AlphaZero — to identify optimal tax policies for a simulated economy.
The relatively simple but promising tool is a first step toward evaluating policies in an entirely new way. Team member Alex Trott says, “It would be amazing to make tax policy less political and more data-driven⁵.”
In one early result, the AI found a policy that, in terms of maximizing both productivity and income equality, was 16% fairer than a state-of-the-art progressive tax framework studied by academic economists. Blake LeBaron of Brandeis University in Massachusetts, who has used neural networks to model financial markets, says, “I think it’s a totally interesting idea.”
The AIs converge on optimal behavior by repeating the simulation millions of times. The key is the double dose of AI: neural networks have been used before to control agents in simulated economies, but here the workers and the policymaker continually adapt to each other’s actions.
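The AI Economist trains deep reinforcement learning agents inside a rich two-level simulation; as a toy illustration of the trial-and-error idea (the welfare function, candidate rates, and all numbers below are invented), an epsilon-greedy learner can search over flat tax rates:

```python
# Toy illustration of learning a tax rate by trial and error.
# The real AI Economist uses deep RL in a rich simulation; here a
# simple epsilon-greedy bandit searches over flat tax rates against
# an invented welfare function.
import random

random.seed(0)
RATES = [i / 10 for i in range(11)]  # candidate flat tax rates 0.0 .. 1.0

def simulate_welfare(rate):
    """Invented stand-in for one simulated episode: productivity falls
    as the tax rate rises, equality rises, and welfare trades them off."""
    productivity = 1.0 - 0.8 * rate
    equality = 0.2 + 0.8 * rate
    return productivity * equality + random.gauss(0, 0.005)

estimates = {r: 0.0 for r in RATES}   # running mean reward per rate
counts = {r: 0 for r in RATES}
for step in range(2000):
    if random.random() < 0.1:                     # explore a random rate
        rate = random.choice(RATES)
    else:                                         # exploit the best estimate
        rate = max(RATES, key=lambda r: estimates[r])
    reward = simulate_welfare(rate)
    counts[rate] += 1
    estimates[rate] += (reward - estimates[rate]) / counts[rate]

best = max(RATES, key=lambda r: estimates[r])
print(best)
```

With this invented welfare function, the learner settles near the analytically optimal rate of 0.5; the double-dose design in the paper additionally lets the simulated workers adapt to whatever policy the learner tries.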
Teaching Neural Networks to classify things in stages
The standard practice for training a machine learning algorithm is to provide all the details at once. A parent teaching a child takes an entirely different approach.
Inspired by this, researchers at Carnegie Mellon University created a new technique that teaches a neural network to classify things in stages.
To determine this progression of difficulty, the researchers first showed the neural network the training data with the final, detailed labels. They then used the resulting confusion matrix to determine the stages of training: in early stages, the least distinguishable categories are grouped under one label, and with each iteration they are split back into finer labels.
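One step of this staging can be sketched as follows. The confusion matrix below is invented for illustration; the point is how a matrix from an initial training pass identifies which classes to merge under a coarse label in the early stages:

```python
# Sketch of one step of the staged-labels idea: use a confusion matrix
# from an initial training pass to merge the most-confused classes into
# a single coarse label for early-stage training. The matrix below is
# invented; rows are true classes, columns are predicted classes.
classes = ["cat", "dog", "car", "truck"]
confusion = [
    [50,  8,  1,  1],   # cats are often predicted as dogs
    [ 9, 49,  1,  1],
    [ 1,  1, 45, 13],   # cars are often predicted as trucks
    [ 0,  2, 12, 46],
]

def most_confused_pair(matrix):
    """Return the index pair of classes with the highest off-diagonal
    confusion, summed in both directions."""
    best, best_score = None, -1
    n = len(matrix)
    for i in range(n):
        for j in range(i + 1, n):
            score = matrix[i][j] + matrix[j][i]
            if score > best_score:
                best, best_score = (i, j), score
    return best

i, j = most_confused_pair(confusion)
coarse_label = classes[i] + "+" + classes[j]
print(coarse_label)  # these two classes share one label in early stages
```

Repeating the merge gives the coarse-to-fine label hierarchy; training then proceeds from the merged labels back to the full set.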
In tests with several popular image classification datasets⁶, the approach almost always led to a final machine learning model that outperformed one trained by the conventional method, increasing classification accuracy by up to 7%.
The idea behind the approach isn’t new. “Curriculum learning,” the practice of training a neural network on stages of increasing difficulty, has been around since the 1990s.
The latest approach, presented recently by the paper’s coauthor Otilia Stretcu at ICLR, is different. The majority of deep learning research today emphasizes the size of models.
AI Contact Tracing Apps fighting COVID-19
Ritika Gunnar, IBM’s vice president of data and AI, told CNBC Make It, “This outbreak is creating overwhelming uncertainty and also greater demand for AI.”
AI has already been deployed to help tackle the pandemic: governments employ it in contact tracing apps⁷, companies rely on it to support the biggest work-from-home experiment in history, and hospitals use the technology to diagnose patients.
The demand for AI is only set to rise; market research company International Data Corporation expects AI jobs globally to grow 16% this year. The industry will need more women, in particular, to overcome some of its historic bias challenges.
IBM found that the majority (85%) of AI professionals think the industry has become more diverse in recent years, which has had a positive impact on the technology.
In the survey, conducted with more than 3,200 people across North America, Europe, and India, 86% said they are now confident in AI systems’ ability to make decisions without bias.
Lisa Bouari, executive director at OutThought AI Assistants¹³ and a recipient of IBM’s Women Leaders in AI awards, said, “To encourage women into the industry and keep them there, more needs to be done.”
5G Mega-Trend and AI Applications
AI will be supercharged by the 5G roll-out. John Smee, VP of engineering and head of 5G R&D at Qualcomm, said, “AI is a huge priority. We are seeing transformation happening, with AI going from the cloud to being distributed, such as on the edge or IoT devices.”
Qualcomm has been embedding AI capabilities in its chips in preparation for this. Qualcomm’s AI engine enables on-device neural network processing, with applications for cameras, battery life, security, and gaming.
Sanyogita Shamsunder, VP of 5G Ecosystems and Technology Innovation at Verizon said, “Imagine using 5G and AI⁸ to create realistic human representations that can interact with you in real-time to provide remote counseling or just companionship. In our 5G labs, we are working on AI-based “Digital Human” technology.
Life-like, emotionally responsive digital humans that have personality and character can literally talk face-to-face with users and respond to vocal and facial expressions. They have a digital brain that triggers their facial expressions and responses, so if a person shows frustration, they read their emotional state and react with empathy.”
Can AI enhance customer experience more than humans?
AI can now help segment audiences and push campaigns, write personalized messages, or handle every customer service message. It also improves customer support and service interactions: streamlining returns, troubleshooting problems, and improving the website experience, messaging, and customized offerings.
Artificial intelligence is being used to understand more about individual customers and to track customer interactions through browsing history, email open rates, click-through rates, and various other actions. Without AI, companies have to guess which marketing technique is most effective.
Through big data and AI-enabled analytics⁹, companies can now develop more comprehensive customer profiles and know precisely what the customer wants. Algorithms that track interactions and behavior also let AI understand each customer’s specific needs, improving the entire customer experience.
AI-enabled chatbots are becoming increasingly popular because of their ability to open conversations with customers and provide relevant answers. In fact, AI can help at every touchpoint across the whole customer life cycle and purchasing experience.
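As a minimal sketch of how a chatbot maps a customer message to a relevant answer (the intents and replies below are invented; production chatbots use trained language models rather than keyword overlap):

```python
# Minimal keyword-based chatbot sketch. Production chatbots use trained
# language models; here, invented intents are matched by keyword overlap.
INTENTS = {
    "returns": ({"return", "refund", "exchange"},
                "You can start a return from your order history page."),
    "shipping": ({"shipping", "delivery", "track"},
                 "Standard delivery takes 3-5 business days."),
    "hours": ({"open", "hours", "closing"},
              "Our support team is available 24/7."),
}

def reply(message):
    """Pick the intent whose keywords overlap the message the most;
    fall back to a handoff when nothing matches."""
    words = set(message.lower().split())
    best, best_overlap = None, 0
    for name, (keywords, answer) in INTENTS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = answer, overlap
    return best or "Let me connect you with a human agent."

print(reply("How do I track my order?"))
```

The fallback branch mirrors what real deployments do: hand the conversation to a human when the model is not confident.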
AI Apps improving Children’s Reading Skills
Read Along, an Android app launched by Google, taps AI and machine learning to help children learn to read by providing verbal and visual feedback. It first debuted in India and is now available in over 180 countries and nine languages, including English, Spanish, Portuguese, Hindi, Marathi, Bengali, Tamil, Telugu, and Urdu.
Preliminary research suggests apps like Read Along could significantly improve children’s reading skills: 92% of parents noticed some improvement in their child’s skills because of the app. Kids earn stars and badges as they learn, practice, and progress through around 500 stories and interactive games in Read Along.
Diya, an in-app assistant, demonstrates how to pronounce words and sentences and uses natural language processing to detect whether a student is struggling or successfully reading a passage, giving positive, reinforcing feedback along the way.
Within Read Along, parents can create profiles for multiple readers, who tap on their photos to learn at their own pace, and can track each reader’s individual progress. The app also personalizes kids’ experiences, recommending the difficulty level of stories and games based on their reading performance.
The app follows the launch of narrated children’s stories on Google Assistant.
Comparing the ways children and AI learn about the world
In a preprint paper, researchers at Alphabet’s DeepMind and the University of California, Berkeley propose a framework for comparing the ways children and AI agents learn about the world. This could help close the gap between AI and humans in acquiring new abilities.
Recent evidence suggests that children explore their surroundings more than adults do. The experiments use DeepMind’s Quake-based learning environment, which comprises navigation and puzzle-solving tasks for learning agents.
The tasks are modeled after games children play and require physical or spatial navigation skills. In this setup, children interact with DeepMind Lab through a custom Arduino-based controller, which exposes the same four actions the agents use: move forward, move back, move left, and turn right.
During experiments approved by UC Berkeley’s institutional review board, the researchers attempted to determine two things:
✅ Whether differences exist in how children explore unknown environments.
✅ Whether children are less susceptible than AI agents to fitting too closely to a particular set of data (i.e., overfitting).
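Overfitting, the subject of the second question, can be illustrated with a toy model that memorizes its training data: a 1-nearest-neighbor predictor fits noisy training points perfectly yet generalizes worse than a simple average (all data below is synthetic):

```python
# Sketch of overfitting: a 1-nearest-neighbor model memorizes noisy
# training data, while a simpler constant predictor generalizes better.
# All data below is synthetic, drawn around a known true value.
import random

random.seed(1)
true_value = 5.0
train = [true_value + random.gauss(0, 1) for _ in range(20)]
test = [true_value + random.gauss(0, 1) for _ in range(200)]

def nn_predict(x, data):
    """1-nearest-neighbor regression: echo the closest training point."""
    return min(data, key=lambda t: abs(t - x))

mean_model = sum(train) / len(train)          # simple constant predictor

# Mean squared error against the true underlying value.
nn_error = sum((nn_predict(x, train) - true_value) ** 2 for x in test) / len(test)
mean_error = (mean_model - true_value) ** 2

print(round(nn_error, 3), round(mean_error, 3))
```

The memorizer tracks the training noise instead of the underlying value, so its test error is much higher, which is the behavior the researchers probe for in the AI agents.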
Initial data suggests that children are less likely to explore an area in the dense-reward condition.
Aidoc’s AI algorithms approved by the U.S. Food and Drug Administration
The U.S. Food and Drug Administration (FDA) has cleared Aidoc’s¹⁴ AI algorithms for “adjunctive” detection of findings associated with COVID-19.
The agency’s clearance acknowledges the algorithms could be used to prioritize incidental (i.e., non-specific) CT findings tied to COVID-19 infections, not to replace traditional COVID-19 diagnostic tests like serological tests and nasopharyngeal swabs.
The American College of Radiology (ACR), radiological organizations in Canada, New Zealand, and Australia, and the U.S. Centers for Disease Control and Prevention (CDC) recommend against using CT scans or X-rays for COVID-19 diagnosis, even as companies such as Alibaba, RadLogics¹⁵, Lunit¹⁶, DarwinAI¹⁷, Infravision¹⁸, and Qure.ai¹⁹ assert that their systems can help. Aidoc might now play a role in triage by indicating when further testing is required.
Sometimes even the best AI systems cannot distinguish COVID-19 from common lung infections like bacterial or viral pneumonia. But recent studies found that as many as 10% of asymptomatic patients undergoing CT scans for other conditions were discovered to have COVID-19.
Aidoc claims that its products have analyzed over 1.2 million scans and can reduce report turnaround times by up to 60.1%.