“Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence.” —Ginni Rometty

Artificial intelligence is having a moment in the public sphere. OpenAI’s ChatGPT text predictor has inspired many competitors and offshoots that other companies can use to increase production and boost sales. There is a lot of excitement around this emerging technology, but can it improve the Safety Lifecycle?

The Genesis of AI

Artificial intelligence (AI) has been the subject of science fiction for decades. The concept was first described in Alan Turing’s 1950 paper “Computing Machinery and Intelligence” and has now started to become reality. As computers became more commonplace and the hardware more robust, software development for general industry grew in popularity. Most software relies on human operation to a significant degree and is chiefly used to mathematically model a process or perform a list of tasks when prompted (like macros in Excel). The current iteration of “AI,” however, is built on neural networks that have been trained faster, and on more information, than any human could manage, with constraints on which prompts they will act on. The most popular AIs are chatbot models like ChatGPT, which, given a prompt, generate a response based on the large library of information they were trained on. This allows them to take a variety of inputs and put together detailed responses, though not necessarily answers.

I’m Sorry Dave, I Can’t Do That

Current AI models operate under two kinds of constraints: some deliberately programmed and some innate to the models themselves. Programmed constraints include hard stops on specific subjects that could be used to hurt others; these ensure the model is used productively. Innate constraints come from the training itself. Text predictors and other models are trained on a broad pool of information, so a specific question may lack the context needed for a specific response. If incorrect information is included during training, the model can repeat it in its responses. Even when the training information is accurate, an AI may still produce inaccurate responses, because the model is focused on predicting the next words, not on the topic itself, especially a complex one. These responses can sound confident and be taken as fact unintentionally, so it falls to the user to spot inaccuracies so they can be corrected or the model updated. Models are also bound by the type of output they can produce (sound, images, text, etc.), which dictates what kind of information they can receive and return. Users may need practice with a given model to craft the prompt that ends with the desired output. With the proper understanding and skills, it can be possible to use certain AIs as an aid in the workplace.

AI and Process Safety

Artificial-intelligence text predictors can be used to develop written documents. Since most safety documentation has specific requirements and sections, it’s not farfetched that a prompt delivered to a chatbot model could provide a rough draft of a procedure, policy, or program. The generated documents may need tweaks, as language models can introduce grammatical errors, and some requirements in standards are too general about the documentation needed. Some general industry requirements promulgated by OSHA state only that “a program, inspection, or testing is required” without giving examples or detailing the specific information the documents must contain. That does not mean text-generating models can’t be used here; rather, they require more context from the individual in the prompt, just as a person new to a company’s safety role would need specific training before drafting the document themselves.
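As a concrete illustration, the sketch below shows what such a drafting request might look like in practice. It is a minimal example assuming the OpenAI Python SDK and an API key in the environment; the model name, prompt wording, and equipment details are hypothetical placeholders, not recommendations.

```python
# Minimal sketch: asking a chatbot model for a rough draft of a safety
# procedure. Assumes the OpenAI Python SDK (openai>=1.0) is installed
# and the OPENAI_API_KEY environment variable is set. The model name
# and all site-specific details below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# The site-specific context in the prompt is what turns a generic
# answer into a usable rough draft, as noted above.
prompt = (
    "Draft a lockout/tagout procedure for a 480 V conveyor motor "
    "with a single disconnect switch, per 29 CFR 1910.147. Include "
    "sections for purpose, scope, responsibilities, and step-by-step "
    "energy isolation, and leave bracketed blanks for site details."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model works
    messages=[
        {"role": "system", "content": "You draft workplace safety documentation."},
        {"role": "user", "content": prompt},
    ],
)

# The output is a starting point, not a finished procedure: it still
# needs review against the actual equipment and the standard.
print(response.choices[0].message.content)
```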

Beyond the Documents

At this stage in artificial intelligence development, text generation is the main use a process safety engineer can expect to find for it. Document writing can take up significant time when a template or other resources are not available. AI-based models can only help with certain aspects of the Safety Lifecycle, since documentation still has to be implemented in the field. Aspects of safety programs like inspections, hazard analysis, and audits will certainly require human input. However, a model that reviews reports and summarizes their findings can make that information more concise. Likewise, the time freed up lets someone working in EHS spend more of it with their team, which can be invaluable for validating and improving safety culture. More time to communicate with those on the floor fosters better questions and comments about the current safety program and the improvements that could take it to the next level.
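The report-review idea can also be sketched in a few lines. The example below, again assuming the OpenAI Python SDK, gathers plain-text inspection reports from a hypothetical reports/ folder and asks the model for a consolidated summary of findings; the file layout and prompt are illustrative assumptions.

```python
# Minimal sketch: summarizing inspection reports with a chat model.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY, as in the earlier
# example. The reports/ directory and *.txt layout are hypothetical.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# Collect the raw report text; real reports may need cleanup or
# chunking if they exceed the model's context window.
reports = "\n\n---\n\n".join(
    p.read_text() for p in sorted(Path("reports").glob("*.txt"))
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": "Summarize the key findings, repeat issues, and "
            "open corrective actions across these inspection reports:\n\n"
            + reports,
        },
    ],
)

# A human still has to verify the summary against the source reports.
print(response.choices[0].message.content)
```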

Photo credit: Michael Schwarzenberger via Pixabay

The March of Technology

As technology continues to make greater strides, it is up to the human mind to find new and different uses for the tools it produces. AI models can take some of the stress out of brainstorming and out of developing documents from scratch. They might not be as effective as a group of individuals, but they are conveniently available when others are not. Something as simple as “talking” to an AI model can get an individual out of their own head and let them see the task at hand more clearly. There’s a reason the old advice to read your writing out loud helps you discover mistakes; text-predictive models can facilitate a similar experience and help fuel brainstorming when needed.

How to Use AI

Artificial intelligence will continue to present new tools as the culture attunes to it. Wherever artificial intelligence goes, it will remain a potential tool for process safety. It will always rely on human input, though, because the concern for protecting workers’ lives is eternal.
