Artificial intelligence (AI) is a concept already embedded in our day-to-day lives. Your smartphone? AI. Google’s algorithm? AI. Your email spam filter? AI. When we speak about AI, it’s easy to jump to thoughts of the Terminator movies or I, Robot, but is this really a vision of where technology is heading?
AI is an extremely broad subject, but there are some fundamentals that will become common terms as the AI revolution gathers pace, and they’re relevant to EHS.
3 Categories of AI
There are 3 categories of AI:
- Artificial Narrow Intelligence (ANI)
- Artificial General Intelligence (AGI)
- Artificial Super Intelligence (ASI)
We’ve already achieved ANI – that’s a program that can beat the most intelligent human at one thing, like a chess game or the board game “Go,” as recently was achieved.
What lies ahead is AGI, and it might be approaching us much quicker than we think. AGI is classed as a computer system that is as smart as the human brain across the board. In a first-draft style attempt, Microsoft recently created “Tay,” a tweeting AI chatbot that learns from what the online Twitter community is talking about and responds to messages with logical answers… in theory.
Tay almost immediately had to be taken down in March after it (she?) went rogue and generated sexist, racist remarks for all to see. At this point in AI development, it’s easy to underestimate the progress made to date and the impact of progress we will see even in the next couple of years.
Although practical use of this new technology right now can be a little clunky and gimmicky, we are on an unstoppable path where AI will continue to become ever-more sophisticated and integrated into our lives.
How Quickly Will AI Evolve?
As with any technology – the first phone, first car, first flying contraption – it would be hard for the inventors of such technology to imagine the convergence and development of their inventions (and their impact on society). Crucially, though, the speed of progress is getting much, much faster. In effect, technologists now can achieve what would have been 100 years of progress at 20th-century development rates in just seven years, and the pace of change is quickening still.
What does this mean? Well, for starters, huge amounts of change! Change comparable to the Wright Brothers seeing a jumbo jet thunder past within seven years of their first powered flight at Kitty Hawk. A bit mind-blowing? You betcha!
I have mentioned ANI and AGI – clearly, as Tay, Siri and Cortana demonstrate, we have lots of progress to make to move from ANI to AGI. Long before we see Terminator-type T-800s with EHS scanners and hi-vis jackets roaming about construction sites, we will see a huge transformation for health and safety and health and safety law.
But let’s not get too carried away (at this point); instead, let’s look at a single application of AI: It’s currently predicted that we will see the first consumer cars capable of fully autonomous driving by 2019. Yes, that’s correct; only three years away.
You may have seen in the news recently that one of Google’s driverless cars (an ANI system) in California was involved in a collision with a passenger bus (see Figure 2). It was a minor one, and luckily nobody was injured, but it brought to my attention the ways AI – especially in its infant stages – could have a significant impact on health and safety. Or, at least, it soon will demand new laws be put in place for regulating how safety incidents involving AI are handled. Combine this with the speed of change and its ability to transform the workplace, and we have a continual challenge for EHS and the law to keep up.
The wider impact of AI on the workplace is difficult to imagine. It’s predicted that 30 percent of our jobs will be taken over by AI machines by 2025. This could result in a safer working environment, but there will be other issues we just have not anticipated that will impact the EHS manager, occupational safety and health laws and wider society. Just as Microsoft did not predict that humans would “teach” Tay to tweet racist and sexist comments, you can bet that we humans will create havoc in many cases. It will be a first if such widely deployed technology is not hacked, stolen and generally abused by the unscrupulous.
This article has now probably got you thinking not just from an EHS perspective, but from a life-as-we-know-it perspective. ASI is where the sphere of AI becomes the biggest game changer. ASI is defined as an intellect much smarter than the best human brains in every field, and it would reach levels of intelligence humans cannot even comprehend. Most experts believe we are some way off ASI, but once we reach AGI, ASI very quickly could follow.
If you want to read about how this could even be possible, an article by Tim Urban, "The AI Revolution: The Road to Superintelligence," sums it up.
Will a Robot Interpret Near-Misses?
In a word, yes – but it’s more a question of when it will be widely applicable.
Since I’ve already mentioned AI in transport, let’s look at the ability of driverless vehicles to detect nearby objects. Google’s cars take notice of cyclists, pedestrians and other hazards by using an on-board scanning system. This technology even can pick up human gestures like a cyclist’s hand signal, and it analyzes movements to decide the car’s position. If a cyclist is hovering around lanes, the car will hang back. It doesn’t seem this level of sophisticated decision-making was present in the California bus incident, but it will get there.

AI can “see” in a sense, but to what extent can it apply logic to general processes? In a close call, the significant incident that could have happened didn’t happen, so ANI may have trouble registering it as an event. As humans, we have a high degree of sophistication when it comes to logic and intuition, and it’s these natural skills that may prove most challenging to teach – the things we do without thinking. But when (rather than if) AI can recognize any narrowly avoided situation, AI very well could reveal near-misses humans are incapable of spotting. Furthermore, AI will eliminate the barriers that humans experience when logging a close call, such as difficulty, embarrassment or peer pressure.
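To make that concrete, here is a deliberately crude sketch of how a narrow system might flag one type of close call – a worker and a piece of moving equipment getting too close without contact. The data format, threshold and logic are hypothetical assumptions for illustration, not any vendor’s actual system, and they show how narrow a simple rule is compared with the general recognition described above.

```python
from dataclasses import dataclass

# Hypothetical proximity readings, e.g. from an on-site scanning system.
@dataclass
class ProximitySample:
    timestamp: float    # seconds since start of shift
    distance_m: float   # gap between worker and moving equipment, in meters
    contact: bool       # True if an actual collision/contact was recorded

# Assumed "too close" distance; a real deployment would tune this per site and task.
NEAR_MISS_THRESHOLD_M = 1.0

def detect_near_misses(samples: list[ProximitySample]) -> list[ProximitySample]:
    """Flag samples where the gap closed below the threshold but no contact occurred.

    This captures only one narrow pattern of close call; recognizing *any*
    narrowly avoided incident, as discussed above, is far beyond a rule like this.
    """
    return [s for s in samples if s.distance_m < NEAR_MISS_THRESHOLD_M and not s.contact]

if __name__ == "__main__":
    log = [
        ProximitySample(0.0, 4.2, False),
        ProximitySample(1.0, 0.6, False),   # close call: should be flagged
        ProximitySample(2.0, 3.8, False),
    ]
    for event in detect_near_misses(log):
        print(f"Near-miss at t={event.timestamp}s, gap {event.distance_m} m")
```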
Near-miss reporting is an extremely important player in reducing accident rates. There have been cases of companies reducing monetary losses by 90 percent as a result of investigating near-misses, so it’s fairly crucial that any AI system expected to be self-sufficient can understand the concept.
It’s estimated that for every workplace fatality, there are 300 near-misses – which would mean there were 1.4 million near-misses in the United States in 2014, based on OSHA fatality statistics. Whether these close calls are reported and investigated is the responsibility of individual organizations and is something AI almost certainly could help us with.
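For context, the arithmetic behind that 1.4 million figure is straightforward, assuming the roughly 4,700 U.S. workplace fatalities reported for 2014 (the exact count depends on which preliminary or revised statistics are used):

$$4{,}700 \text{ fatalities} \times 300 \text{ near-misses per fatality} \approx 1.4 \text{ million near-misses}$$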
Arguably, if sufficiently sophisticated AI systems were carrying out the jobs of humans in, for example, construction, near-misses would occur significantly less often. AI will be efficient, careful and not open to human error, such as knocking a heavy wrench off a scaffold platform.
Can an AI see like a human can? Can it witness a near-miss and recognize it? Any AI operating in high-risk industries would have to be programmed to understand exactly what a near-miss is. Explaining the concept of “what could have happened, but didn’t” to a machine seems impossible at this point in time, but we will see huge advances in this area over the coming decade.
Can AI Recognize Risk?
Assessing risk remains a real effort for human beings. It’s so important in decision-making, yet can be difficult to quantify and measure.
The answer to this question is yes – certain industries already are using hybrid AI to aid in calculating it. When AGI transpires, it will fully understand what risk is and how to mitigate it. It will be able to calculate the percentage likelihood of an incident happening, probably instantaneously and without any human intervention. ASI will be able to determine lower-risk ways of doing things that humans have never even thought of.
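For a sense of what “calculating risk” means in practice today, here is a minimal sketch of the conventional likelihood-times-severity rating that many EHS teams apply by hand. The category scales and action threshold below are illustrative assumptions rather than any specific standard, and the names are mine.

```python
# A minimal, hypothetical risk-scoring sketch: likelihood x severity on 1-5 scales.
# The categories, weightings and threshold are illustrative assumptions only.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "catastrophic": 5}

def risk_score(likelihood: str, severity: str) -> int:
    """Return a 1-25 risk rating from qualitative likelihood and severity."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def requires_action(score: int, threshold: int = 12) -> bool:
    """Flag risks at or above an (assumed) action threshold for mitigation."""
    return score >= threshold

if __name__ == "__main__":
    score = risk_score("possible", "major")   # 3 * 4 = 12
    print(score, requires_action(score))      # 12 True
```

The step toward smarter AI, presumably, is replacing the hand-picked categories above with likelihoods estimated continuously from sensor and incident data rather than from human judgment.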
What’s the Situation Right Now?
The case of Google’s driverless car hitting a municipal bus on Feb. 14 tells us a lot about how we’re going to have to adjust. The incident came about because the car sought to avoid some sandbags in a wide lane by moving over to the left, before re-entering the center of the lane and striking the side of the bus.
The Santa Clara Valley Transportation Authority still is investigating the crash, so we’ll have to stay tuned for the outcome. Right now what we need to realize is the potential for this kind of incident to become a lot more commonplace as we move to more sophisticated AI such as driverless cars in our everyday lives.
The question of accountability could become a difficult topic. So far so minor, but we need to ensure that everything is effectively monitored, reported and followed up, just as we should be doing with human-caused accidents. Google admits “some responsibility” for the collision, but is quick to add that the bus driver also is not blame-free. If charges are pressed, this would be the first time we see a prosecution against a car with no driver, and could be an obscure glimpse into the future.
It’s all very speculative, but based on the current AI capabilities we experience throughout the world, EHS is going to have to adapt to these new intelligent machines. We’re now seeing driverless trucks due to be tested on UK roads, following Germany’s test in 2015 and Daimler’s U.S. license for such a test – a step toward the predictions of 30 percent of jobs being replaced by AI within the next nine years.

The striking thing for me is that the advance of technology is getting quicker – much, much quicker, thanks to exponential growth – and therefore we are not far from the point where technology will be sufficiently advanced to replace the human in harm’s way with a machine. Technology already developed will transform many industries over the next couple of years, as the ability to replicate it becomes increasingly cost-effective. One only needs to look at the advances being made in research centers across the globe to get a glimpse of what the future looks like.
With improvements in battery life, software, hardware and, crucially, costs, there are numerous ways EHS can benefit from the rise of AI in coming years. So, good news for health and safety, but difficult to understand the implications for life as we know it! After all, once AI becomes self-aware and smarter than us, surely it becomes the biggest potential risk to us in all history?
I’ll leave you with an indicative statement – if this all sounds too much like science fiction, it’s worth noting that the World Economic Forum mentions AI in their Global Risks 2015 report:
The rapid pace of innovation in emerging technologies, from synthetic biology to artificial intelligence, also has far-reaching societal, economic and ethical implications. Developing regulatory environments that are adaptive enough to safeguard their rapid development and allow their benefits to be reaped, while preventing their misuse and any unforeseen negative consequences is a critical challenge for leaders.
John Drzik, president of Global Risk and Specialties at Marsh, said: “Innovation is critical to global prosperity, but also creates new risks. We must anticipate the issues that will arise from emerging technologies, and develop the safeguards and governance to prevent avoidable disasters.”
Murray Ferguson is a director at Pro-Sapien Software. Ferguson has been involved in providing business intelligence IT solutions to some of the world's largest companies for over 15 years. He particularly is interested in using modern technologies for improvements in EHS performance, striving to support business processes and promote safety best practice in high-risk industries. He can be reached via email at murray.ferguson@pro-sapien.com or by phone at +44 (0) 141 353 1165.