Enterprises need winning strategies for the AI economy, and those strategies should start with data strategy, data governance, and data platforms. As the AI economy evolves, automation¹ becomes critical for businesses scaling operations and driving efficiency in a competitive market.
Some people wonder: Will AI destroy or create jobs?
This question draws mixed reactions, but let us be clear: even as AI automates many jobs, new roles will be created, and they may well exceed our expectations.
The digitization of manual labor by AI has shown that collaboration between humans and technology yields positive results.
Despite the benefits of AI, there is a dark side too: from super-intelligent systems that could threaten humanity, to deepfakes and bots, AI has the potential to cause damage if left unchecked.
Good AI depends on clean data, and the same is true for automating data collection in journalism.
These insights and more in this week's AI update.
Dark Artificial Intelligence
Dark AI is a blanket term for the harm an autonomous system can do given the wrong inputs (biased data, unchecked algorithms, and so on). Below is a list of potential #darkAI scenarios, ranging from economic malfeasance to privacy violations²:
A small number of tech giants could monopolize the AI economy
Algorithms can manipulate our buying and selling patterns and transactions
AI can help widen the gap between leaders and laggards, the rich and the poor
Small fragments of personal data could be used to concoct sophisticated deepfakes
These scenarios could come to fruition through malevolent AI applications such as smart dust and drones, facial recognition and surveillance, fake news and bots, and smart-device listening.
Drones and armies of undetectable smart dust (tiny microelectromechanical systems) could work together to obliterate entire power grids and smart infrastructure systems.
Facial recognition gives autonomous systems access to millions of individuals' facial characteristics, which, thanks to cloning and bots, can be mobilized in the form of compromising deepfake images and videos.
Digitization of Manual Labor
Whether AI that digitizes manual labor⁴ makes workers frown or smile will come down to how employers choose to use it.
Denso originated as part of Toyota, which still owns a stake in the company. Like its parent, Denso follows the kaizen philosophy of manufacturing, which encourages workers at all levels to participate in improving how a plant operates.
That could lead employers to let workers engage with #data and insights from AI overseers in ways that improve their jobs as well as productivity, says Susan Helper, an economics professor at Case Western Reserve University who studies manufacturing.
However, US plants, even those applying kaizen, don't have a strong track record of giving workers much influence compared with parts of the world that have stronger worker protections and labor cultures.
The difficulty of balancing workplace data collection with employee trust convinced New York startup StrongArm to limit the uses of its own worker-tracking #technology.
Machine Learning on Edge Devices
Cartesiam, a startup that aims to bring machine learning to edge devices powered by microcontrollers, has launched a new tool for developers⁵ who want an easier way to build services for these devices.
The new NanoEdgeAI Studio is the first IDE specifically designed for machine learning and inference on Arm Cortex-M microcontrollers, which already power billions of devices.
The company first bet on Intel’s short-lived Curie SoC platform. That didn’t work out all that well, given that Intel axed support for Curie in 2017.
Since then, Cartesiam has focused on the Arm Cortex-M platform.
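Cartesiam's own tooling isn't reproduced here, but a rough sense of what it takes to squeeze a model onto a Cortex-M-class microcontroller can be sketched with a stand-in technique: TensorFlow Lite conversion with full-integer quantization. This is an illustration only, not Cartesiam's pipeline, and the model, training data, and file name below are hypothetical.

```python
# Illustrative only: a stand-in for the kind of workflow edge-ML tooling automates.
# NanoEdgeAI Studio is not used here; this sketch relies on TensorFlow Lite instead.
import numpy as np
import tensorflow as tf

# A deliberately tiny model, e.g. an anomaly detector over 128-sample sensor windows.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Hypothetical training data: vibration windows labeled normal/abnormal.
x = np.random.rand(256, 128).astype(np.float32)
y = np.random.randint(0, 2, size=(256, 1))
model.fit(x, y, epochs=1, verbose=0)

# Convert with full-integer quantization so the model fits microcontroller RAM/flash.
def representative_data():
    for i in range(100):
        yield [x[i:i + 1]]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

# The resulting byte array is what gets compiled into firmware on the device.
with open("anomaly_detector.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model)} bytes")
```

The point of the sketch is the constraint, not the model: an edge-ML toolchain has to produce something measured in kilobytes, which is why purpose-built IDEs for microcontrollers exist at all.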
AI will create more jobs than it destroys
Artificial intelligence will either destroy jobs or create new jobs — depending on which report you read.
In truth, it is probably a little of both — AI will undoubtedly replace human workers in some spheres, but it will also create new roles⁶, many of which we cannot yet imagine.
A recent report from PA Consulting, titled "People and machines: From hype to reality," supports this theory and predicts AI is more likely to create jobs than destroy them.
The research behind this report, based on 750 cross-industry businesses in the U.K., found that 32% of respondents had invested in AI and automation in the last five years, split evenly across tools for cognitive and physical tasks. Of those that had invested, 43% reported an increase in jobs as a result, while 40% reported a reduction.
“Our research shows AI and automation are likely to lead to a net gain in job numbers,” the report reads. “As some types of jobs disappear, new ones will emerge.” These findings are supported elsewhere — the Organization for Economic Co-operation and Development (OECD) also predicts that while AI and automation will certainly impact jobs, there will be no net loss.
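As a back-of-envelope reading of the PA Consulting percentages quoted above (illustrative arithmetic only, not the report's own analysis):

```python
# Back-of-envelope reading of the figures quoted above.
surveyed = 750                      # cross-industry U.K. businesses surveyed
invested = round(surveyed * 0.32)   # 32% invested in AI/automation -> 240
jobs_up = round(invested * 0.43)    # 43% of investors reported more jobs -> ~103
jobs_down = round(invested * 0.40)  # 40% reported fewer jobs -> ~96

print(f"Invested in AI/automation: {invested} of {surveyed}")
print(f"Reported job increases:    {jobs_up}")
print(f"Reported job reductions:   {jobs_down}")
print(f"Firms gaining minus firms losing: {jobs_up - jobs_down}")
```

On these figures the margin is modest, which is consistent with the report's framing of a net gain rather than a dramatic one.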
Microsoft’s AI for Health
Bill Gates is known for his contributions to global health through the Gates Foundation. Microsoft, arguably his biggest legacy, is moving in the same direction: the company is actively involved in helping the world's population overcome some of its most challenging health problems and concerns.
Microsoft recently announced a new #healthcare initiative known as "AI for Health," a program worth over $40 million. AI for Health⁷ evolved out of the AI for Good program, which applies AI and grant funding to broader humanitarian, accessibility, and environmental causes around the world.
The new initiative will primarily focus on health-related problems and the integration of #AI and #tech solutions.
The main goal of AI for Health is to research the impact of AI in the medical field and to develop algorithms for automated disease detection and other medical tasks.
Besides building new solutions, the program will also work toward making these technologies more available worldwide.
“AI for Health is a philanthropic initiative that complements our broader work in Microsoft Healthcare. We will support specific nonprofits and academic collaboration with Microsoft’s leading data scientists, access to best-in-class AI tools and cloud computing, and select cash grants,” wrote Microsoft’s John Kahan.
Superintelligence and the Risk to Humanity
Could we wake up one morning dumbstruck that a super-powerful #AI has emerged, with disastrous consequences? Books like Superintelligence by Nick Bostrom and Life 3.0 by Max Tegmark, as well as more recent articles, argue that malevolent superintelligence is an existential risk for humanity⁸.
But one can speculate endlessly. It’s better to ask a more concrete, empirical question: What would alert us that superintelligence is indeed around the corner?
We might call such harbingers canaries in the coal mines of AI. If an artificial intelligence program develops a fundamentally new capability, that’s the equivalent of a canary collapsing: an early warning of AI breakthroughs on the horizon.
Self-driving cars are one such canary. They remain further in the future than boosters like #ElonMusk anticipated, and the AI behind them can still fail catastrophically in atypical situations, such as when a person in a wheelchair is crossing the street.
Driving is far more challenging than previous AI tasks because it requires making life-critical, real-time decisions based on both the unpredictable physical world and interaction with human drivers, pedestrians, and others.
Data Gathering Automation for Journalism
Applied XLabs is a new startup building tools that can automate data-gathering for journalists — and eventually, for knowledge workers in other industries.
The company is emerging from Brooklyn-based Newlab, with The Boston Globe as its launch partner. It will be led by Francesco Marconi, previously R&D chief at The Wall Street Journal and head of AI strategy at the Associated Press.
The plan is for Applied XLabs to develop products to help newsrooms, starting with The Globe, automatically pull data and generate insights.
Applied XLabs is also the first startup to emerge from Newlab’s venture studio program.
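Applied XLabs has not published its tooling, but the core idea of automated data-gathering for reporters can be sketched in a few lines: pull a structured dataset on a schedule, check it for newsworthy changes, and draft a plain-language lead for a journalist to verify. Everything below (the dataset, the threshold, the wording) is a hypothetical stand-in.

```python
# Hypothetical sketch of automated data-gathering for a newsroom.
# The dataset and threshold are invented for illustration; a real pipeline
# would pull from live public sources (APIs, open-data portals, filings).

# Weekly counts of, say, building permits issued by a city (hypothetical numbers).
weekly_permits = {
    "2020-02-03": 412,
    "2020-02-10": 405,
    "2020-02-17": 398,
    "2020-02-24": 517,  # an unusual jump worth flagging to a reporter
}

def draft_insights(series, threshold=0.15):
    """Flag week-over-week changes larger than `threshold` as candidate story leads."""
    leads = []
    weeks = sorted(series)
    for prev, curr in zip(weeks, weeks[1:]):
        change = (series[curr] - series[prev]) / series[prev]
        if abs(change) >= threshold:
            direction = "rose" if change > 0 else "fell"
            leads.append(
                f"Building permits {direction} {abs(change):.0%} in the week of {curr} "
                f"({series[prev]} -> {series[curr]}). Worth a closer look."
            )
    return leads

for lead in draft_insights(weekly_permits):
    print(lead)
```

The value for a newsroom is less the arithmetic than the routine: the machine watches the feeds continuously, and the journalist spends time on the anomalies it surfaces.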
AI-Informed Lead Generation
How does Intel, which expects the market opportunity for AI hardware to grow from $2.5 billion in 2017 to $10 billion in 2022, find new customer opportunities?
With AI. Intel detailed a tool its IT Advanced Analytics team developed internally to mine millions of public business pages and extract actionable segmentation for both current and potential customers.
The chipmaker says that its sales and marketing staff have used the new system to discover new leads faster and more accurately than before⁹.
The system focuses on two key classification aspects: (1) an industry segment ranging from verticals such as “healthcare” to more specific fields such as “video analytics” and (2) functional roles like “manufacturer” or “retailer” that further distinguish potential sales and marketing opportunities.
The system ingests a constant stream of textual data from millions of sites, updating a multi-million-node knowledge graph with gigabytes of new data every hour; that data is then passed to a set of machine learning models that segment potential customers.
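Intel has not published the implementation, but the two classification aspects described above can be illustrated with a minimal sketch: two independent text classifiers, one for industry segment and one for functional role, trained on snippets of business-page text. The snippets, labels, and choice of scikit-learn models are assumptions made for illustration.

```python
# Minimal sketch of two-aspect segmentation over business-page text.
# The training snippets and labels are hypothetical; Intel's actual system
# feeds a large knowledge graph into its own models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pages = [
    "We build video analytics software for hospital imaging departments",
    "Retail chain operating 300 grocery stores across the region",
    "Contract manufacturer of industrial sensors and control boards",
    "Cloud platform for managing patient records and telehealth visits",
]
industry = ["video analytics", "retail", "manufacturing", "healthcare"]
role = ["software vendor", "retailer", "manufacturer", "software vendor"]

# One classifier per aspect, each with its own TF-IDF features.
industry_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
role_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
industry_clf.fit(pages, industry)
role_clf.fit(pages, role)

new_page = "Startup selling camera-based video analytics to retailers"
print("industry segment:", industry_clf.predict([new_page])[0])
print("functional role: ", role_clf.predict([new_page])[0])
```

Keeping the two aspects separate mirrors the description above: a single company page yields both a vertical (where it sits in the market) and a functional role (what it does), and the combination is what makes a lead actionable.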
AI-informed lead generation is fast becoming the norm rather than the exception. Roughly 87% of enterprise AI adopters say they’re using AI and machine learning for sales forecasts and to improve their email marketing.
Deceiving AI Programs and Algorithmic Myopia
Artificial intelligence has made big strides recently in understanding language, but it can still suffer from an alarming and potentially dangerous kind of algorithmic myopia.
Research shows how AI programs that parse and analyze text can be confused and deceived by carefully crafted phrases¹⁰.
A sentence that seems straightforward to you or me may have a strange ability to deceive an AI algorithm.
That’s a problem as text-mining AI programs are increasingly used to judge job applicants, assess medical claims, or process legal documents. Strategic changes to a handful of words could let fake news evade an AI detector; thwart AI algorithms that hunt for signs of insider trading; or trigger higher payouts from health insurance claims.
“This kind of attack is very important,” says Di Jin, a graduate student at MIT who, together with researchers from the University of Hong Kong and Singapore’s Agency for Science, Technology and Research, developed a technique for fooling text-based AI programs.
Jin says such “adversarial examples” could prove especially harmful if used to bamboozle automated systems in finance or health care: “Even a small change in these areas can cause a lot of troubles.”
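The MIT team's actual method is not reproduced here, but the underlying idea, that swapping a few words for near-synonyms can flip a classifier's output while the meaning stays intact for a human reader, can be shown with a toy example. The keyword-based "classifier" and the synonym list below are deliberately simplistic stand-ins.

```python
# Toy illustration of a word-substitution attack on a brittle text classifier.
# The classifier and synonym list are stand-ins, not the MIT researchers' method.

POSITIVE_WORDS = {"excellent", "great", "reliable"}  # what the toy model "knows"

def toy_classifier(text):
    """Label text positive if it contains any known positive word."""
    words = set(text.lower().split())
    return "positive" if words & POSITIVE_WORDS else "negative"

SYNONYMS = {
    "excellent": ["first-rate", "superb"],
    "great": ["terrific"],
    "reliable": ["dependable"],
}

def greedy_word_swap(text, classify):
    """Replace words with synonyms one at a time until the predicted label flips."""
    original_label = classify(text)
    words = text.split()
    for i, word in enumerate(words):
        for candidate in SYNONYMS.get(word.lower(), []):
            trial = " ".join(words[:i] + [candidate] + words[i + 1:])
            if classify(trial) != original_label:
                return trial, classify(trial)
    return text, original_label  # no successful swap found

review = "excellent service and fast delivery"
print(toy_classifier(review))                       # -> positive
adversarial, label = greedy_word_swap(review, toy_classifier)
print(adversarial, "->", label)                     # meaning unchanged, label flipped
```

A human reads "first-rate service and fast delivery" exactly as they read the original; the model does not, and that gap is what the research above exploits at scale against far more capable language models.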
Works Cited
¹Automation, ²Dark AI, ³Data Intensive Computations, ⁴Digitization of Manual Labor, ⁵Developer Tools, ⁶Job automation by AI, ⁷Microsoft’s AI for Health, ⁸AI Super-intelligence, ⁹AI-Informed Lead Generation, ¹⁰Deceiving AI Programs