Enterprise CIOs are rethinking their goals as the coronavirus pandemic continues, with investments shifting towards artificial intelligence, cloud computing, and cybersecurity. The remote work model is influencing these decisions as organizations look for ways to keep operations running smoothly through the pandemic.

The Adobe report found that 84% of companies are using remote work models, with adoption among small businesses at 94%. From AI investments to implementation, there is more to come in 2020 as the world fights the coronavirus.

The development of ethical guidelines for artificial intelligence is meant to safeguard against unfair and dangerous uses of AI as the boom continues. Many people ask: do these ethical guidelines actually influence decisions? To answer that question, it is important to understand how ethical guidelines address loopholes amid the current adoption of artificial intelligence.

The release of GPT-3 is the current hype on the internet, as users praise the language prediction model for its impressive performance. OpenAI released GPT-3 with capabilities that include writing songs, developing stories, and providing technical assistance. Will GPT-3 live up to the hype?

These and more insights in our Weekly AI Update.

Remote Workforce Shifting CIO Priorities

The COVID-19 pandemic has shifted the top concerns of enterprise CIOs¹, who are now prioritizing their spending on areas such as cybersecurity, public cloud, infrastructure, and AI/ML. That is according to a new survey from Adobe, which teamed up with Fortune to measure “How A Remote Workforce Is Shifting CIO Priorities.”

CMO by Adobe, which produces advice, guidance, data, and research for senior business leaders, conducted a mid-March survey of more than 200 CIOs to gauge how attitudes had shifted since a pre-pandemic January post about CIO priorities. Although security was a top-of-mind concern in both surveys, cloud computing is now getting more attention in a remote-work world². Most organizations already use the cloud, but that usage may be increasing in the new world order.

AI Still Fairly New: Only 50 percent of organizations use Artificial Intelligence³ in one or more projects (and only 25 percent of SMBs), with CIOs noting that they leverage AI most for IT and customer support. More than 90 percent of those that have implemented AI have done so only in the past year. The top challenges in implementing AI center on data, funding, and talent.

Overview of Artificial Intelligence Ethics

Current advances in research, development and application of artificial intelligence systems have yielded a far-reaching discourse on AI ethics⁴.

The current AI boom has been accompanied by constant calls for applied ethics, and in consequence a number of ethics guidelines have been released in recent years. These guidelines collect normative principles and recommendations, meant to harness the “disruptive” potential of new AI technologies, that technology developers are expected to adhere to as far as possible.

However, the critical question arises: Do those ethical guidelines have an actual impact on human decision-making in the field of AI and machine learning? The short answer is: No, most often not.

Currently, AI ethics is failing in many cases. Ethics lacks a reinforcement mechanism. Deviations from the various codes of ethics have no consequences. And in cases where ethics is integrated into institutions, it mainly serves as a marketing strategy. Furthermore, empirical experiments show that reading ethics guidelines has no significant influence on the decision-making of software developers.

GPT-3: Third-Generation Language Prediction Model

The artificial intelligence research company OpenAI, founded as a non-profit and backed by names like Peter Thiel, Elon Musk, Reid Hoffman, Marc Benioff, and Sam Altman, has released GPT-3, the company’s third-generation language prediction model. The release of GPT-3 has been met with extreme hype from some of the early users.

GPT-3⁵ is the largest language model ever created and is capable of generating text that, in many cases, is indistinguishable from human writing. OpenAI described the language prediction technology for the first time in a research paper back in May. Last week, some people were given early access to the software through a private beta.

OpenAI is relying on outside developers to learn more about the technology and what it is capable of, and the company has plans to go commercial by the end of this year. Businesses will be able to pay for a subscription to use the AI.

GPT-3 has proved to be the most powerful language model ever created. It evolved from GPT-2, which was released last year and was itself extremely impressive, able to produce competent strings of text after being given an opening sentence.

GPT-3 has 175 billion parameters, up from GPT-2’s 1.5 billion, and has been demonstrated to create short stories, songs, press releases, and technical manuals. Not only can the technology create stories, it can do so in the style of specific writers: in one demonstration, the model needed only a title, an author’s name, and the first word. GPT-3 is also capable of generating other kinds of text, such as guitar tabs and computer code.
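GPT-3 itself is reachable only through OpenAI’s private beta, but the prompt-and-complete pattern described above can be sketched with the publicly available GPT-2 model via the Hugging Face transformers library. The prompt text and sampling settings below are illustrative assumptions, not OpenAI’s actual demo:

```python
# Minimal sketch of the prompt-and-complete pattern described above, using the
# publicly available GPT-2 model (GPT-3 access is limited to a private beta).
# The prompt contents and sampling settings are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A story prompt built from just a title, an author's name, and a first word.
prompt = "Title: The Last Train Home\nAuthor: Ernest Hemingway\n\nThe"

result = generator(
    prompt,
    max_length=120,          # total length in tokens, prompt included
    do_sample=True,          # sample instead of always taking the likeliest token
    temperature=0.9,         # higher values produce more varied prose
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```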

Animal Snout Recognition in China

Forget facial recognition: China is developing animal snout recognition⁶. Ant Financial’s online payment provider Alipay has developed a new feature on its app that can recognize animal nose prints, said to be the first application of such technologies in China. Similar to human fingerprints, animal nose prints are unique and can accurately identify an animal over 99% of the time. Using this identification method, Alipay has partnered with insurance companies to provide insurance services for pet cats and dogs.

On the Alipay app, users can upload photos of their pets to create a digital profile and buy an insurance plan. For cats and dogs between the ages of three months and ten years, Alipay offers three health insurance premium options ranging from 199 to 799 yuan that can insure a pet for up to 20,000 yuan in medical bills annually. When owners wish to claim insurance benefits for a pet, its nose print will be used to verify its identity.

Pet insurance has been around in China for over 10 years, but it has failed to gain popularity because of the difficulty of proving a pet’s identity. Less than 1% of pets in China are covered by insurance, compared to 25% in the UK and 7% in Japan. But the market for pet insurance is larger than ever: in 2019, pet-related products and services in China grew into a 202 billion yuan industry serving almost 100 million pet cats and dogs (and their owners). Alipay believes that its new feature will speed up acceptance of pet insurance in China.

European Start-ups Advancing Artificial Intelligence

There’s little arguing that artificial intelligence is one of the hottest sectors of the tech industry. From recommending personalized content⁷ in your YouTube feed to translating text, diagnosing cancer and driving cars, the number of domains that the AI industry is touching is constantly expanding.

With so much hype and money surrounding the AI industry, it is as good a time as any to be an AI company. In 2018, billions of dollars were invested in AI companies, and by 2030 the industry is projected to be worth more than $15 trillion, according to research firm PricewaterhouseCoopers. TNW is currently part of Data Market Services, a cutting-edge AI program sponsored by the European Commission that aims to advance and support European data-centric startups.

But not all companies that claim to use AI are actually leveraging the technology. Many companies now make vague claims of using AI in their products and services to secure funding or attract customers. A recent study by London-based venture capital firm MMC found that out of 2,830 European startups classified as AI companies, only 1,580 accurately fit the description.

So without further ado, here are some of the European startups we think are leading promising efforts in the field:

German Autolabs

Qucit

Merantix

Braingineers

Understand AI

Human-Level Artificial Intelligence

You will not find any comprehensive data on human stupidity outside of the testimonials at the Darwin Awards, but stupidity is surely the biggest threat humans have faced throughout history. Luckily, we are still the smartest species on the planet, so we have managed to remain in charge for a long time despite our shortcomings.

Unfortunately, a new challenger has entered the arena in the form of AI. And despite its relative infancy, artificial intelligence may not be as far from challenging our status as the apex intellect as you might think.

The experts will tell you that we are far away from human-level AI (HLAI). But no one really knows what the benchmark for that would be.

Trying to define what HLAI should and should not be able to do is just as difficult as trying to define the same for an 18-year-old human. Change a tire? Run a business? Win at Jeopardy?

This line of reasoning usually swings the conversation to narrow intelligence versus general intelligence⁸. But here we run into a problem as well. General AI is, hypothetically, a machine capable of learning any function in any domain that a human can. That means a single GAI should be capable of replacing any human in the entire world given proper training.

Humans do not work that way, however. There is no general human intelligence: the combined potential of human capabilities is not achievable by any single individual. If we build a machine capable of replacing any of us, it stands to reason that it will.

Limitations of Deep Learning and Transfer Learning

Today, artificial intelligence programs can recognize faces and objects in photos and videos, transcribe audio in real-time, detect cancer in x-ray scans years in advance, and compete with humans in some of the most complicated games.

All these challenges were either thought insurmountable, decades away, or were being solved with sub-optimal results. But advances in neural networks⁹ and deep learning, a branch of AI that has become very popular in the past few years, have helped computers solve these and many other complicated problems.

Unfortunately, when created from scratch, deep learning models require access to vast amounts of data and compute resources. This is a luxury that many cannot afford. Moreover, it takes a long time to train deep learning models to perform tasks, which is not suitable for use cases that have a short time budget.

Fortunately, transfer learning, the practice of applying the knowledge gained by one trained AI model to another, can help solve these problems.

In some domains, such as teaching AI to play games, the use of transfer learning is very limited. Those AI models are trained with reinforcement learning¹⁰, a branch of AI that is very compute-intensive and requires a lot of trial and error. In reinforcement learning, most new problems are unique and require their own AI model and training process.

But all in all, for most deep learning applications, such as image classification and natural language processing, there’s a good chance that you’ll be able to shortcut your way with a good dose of clever transfer learning.
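As a rough illustration of that shortcut, here is a minimal transfer-learning sketch, assuming PyTorch and torchvision are installed: an ImageNet-pretrained ResNet-18 is frozen and only a small new classification head is trained. The class count and data loader are placeholders for your own dataset.

```python
# Minimal transfer-learning sketch (assumes PyTorch and torchvision are installed).
# An ImageNet-pretrained ResNet-18 is frozen and only a new final layer is trained,
# which is far cheaper than training a deep network from scratch.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5  # placeholder: number of classes in your own dataset

model = models.resnet18(pretrained=True)

# Freeze the pretrained backbone so its learned features are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final ImageNet layer with a fresh head for the new task.
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Training loop sketch: `train_loader` is a placeholder DataLoader over your images.
# for images, labels in train_loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```

With the backbone frozen, a few passes over a modest dataset are often enough, which is exactly the kind of shortcut described above.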

Neuralink: Brain-Computer Interface

The mission of Neuralink has never quite been clear. We know it is working on a brain-computer interface, a chip designed to be surgically implanted in the human skull. However, what it is for and who it is for remain a bit of a mystery.

As best we can tell based on what’s been revealed so far, it’s shaping up to be a terrifying hormone hijacker capable of potentially giving you forced mental orgasms or making you fall in love. Musk originally said the goal of Neuralink was to produce a BCI so that humans would not lose their competitive edge to AI. The big idea here is that keyboards and other peripherals are not as efficient as a direct thought-to-action interface.

Musk thinks this is going to help us out if a superintelligence¹¹ rises up against us. But the path to jamming spikes in people’s skulls in order to assume control of at least some of their natural motor functions is a bit different than, say, getting permission to build a tunnel under Las Vegas — same concept, different authorities.

That is probably why Neuralink quickly pivoted to medicine. Musk and company currently claim Neuralink will eventually “solve a lot of brain/spine injuries” and treat mental illnesses and cognitive disorders.

We are not expecting much out of Neuralink unless it is ready to commit to building either an invasive medical device aimed at neurology patients or a non-invasive consumer device. Either way, we should all have more information soon.

Scaling Deep Learning Systems

Deep learning has reached the end of its rope, according to a group of researchers from MIT, Underwood International College, the MIT-IBM Watson AI Lab, and the University of Brasilia, who recently conducted an audit of more than 1,000 pre-print papers on arXiv.

The researchers claim that we are essentially running out of compute and could soon reach a point where it is no longer economically or environmentally feasible to continue scaling deep learning systems.

This might come as a shock to TensorFlow users and AI hobbyists running impressive neural networks on GPUs or home computers, but training large-scale models is a power-intensive, expensive proposition. Clever algorithms and dedicated hardware can only take things so far.

If, for example, you want to train a huge state-of-the-art system like OpenAI’s big bad text generator, GPT-2, you will be spending a lot of money and potentially doing some serious damage to the environment.

Based on current trends, the researchers feel we will soon reach a point where achieving further benchmarks — such as reaching higher accuracy with ImageNet — will no longer be cost-effective under the current paradigm.

The field of AI has been staring down the barrel of this gun for a long time. Arguably, machine learning algorithms have been held back by compute since the 1950s. Thanks to a few modern tricks, we’ve enjoyed a spurt of growth for the past decade or so that’s led to one of the most exciting periods for technology in human history.

The MIT researchers believe we’ll come up with better algorithms and “other machine learning methods” to solve our power struggle. Perhaps most interestingly, they also speculate that quantum computing could help bushwhack a path forward.

Works Cited

¹Enterprise CIOs, ²Remote-Work World, ³Artificial Intelligence, ⁴AI ethics, ⁵GPT-3, ⁶Animal Snout Recognition, ⁷Personalized Content, ⁸Narrow Intelligence versus General Intelligence, ⁹Neural Networks, ¹⁰Reinforcement Learning, ¹¹Superintelligence