After attending a few concurrent sessions on artificial intelligence (AI) at the American Society of Safety Professionals (ASSP) Safety 2025 conference in July, and after talking with fellow safety professionals who are at different stages of learning about AI and how it can be used, I have begun my effort to better understand how AI is being used, and can be used, by safety professionals.

I compiled the information in the next few articles using Microsoft Copilot, which I chose because I have a subscription through my company. I entered a series of questions and prompts, and the information below is a summary of the responses provided to me.

First, it is important to understand where the data that artificial intelligence (AI) platforms use and analyze comes from. AI learns by studying large amounts of data, or datasets, much as a child learns language by hearing people talk. The data comes from many different sources. Public web content, such as news stories, blogs, Wikipedia, and forums, helps AI learn how people write and communicate. Some AIs train on books or scientific studies that are shared publicly. Social media posts and conversations (only when allowed and anonymized) can help AI understand trends or how people express themselves. AI can also study pictures and sounds to learn what things look and sound like, such as recognizing faces or understanding speech. Finally, there are official datasets shared by organizations, governments, and universities: collections of data on topics like weather, health, finance, and more.

It is important to think about data privacy. Not all data is fair game. Responsible AI systems are designed with rules to avoid using private or sensitive information. The best AIs are trained using content that’s either public, shared with permission, or stripped of personal details.

Another way to think of AI is as a very fast reader that learns by absorbing information from books, websites, and other sources that are approved for use. It doesn't think or feel, but it gets smarter based on the patterns it sees.
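As a loose illustration of that "pattern reader" idea (a toy sketch only, not how any real AI system is actually built), the short Python script below "reads" a few made-up sentences and learns which word most often follows another. The sentences and word choices are invented for this example.

```python
from collections import Counter, defaultdict

# A toy "training corpus": the few sentences the program will read.
corpus = (
    "wear your hard hat on site. "
    "wear your safety glasses on site. "
    "wear your gloves when handling chemicals."
)

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
words = corpus.lower().replace(".", "").split()
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

# "Predict" the next word by picking the most frequent follower seen so far.
def predict_next(word):
    return following[word].most_common(1)[0][0]

print(predict_next("wear"))  # prints "your", the word that always followed "wear"
```

The program has no understanding of safety or language; it simply repeats the most common pattern in the text it was given, which is the same basic principle, scaled up enormously, behind how language models learn from their training data.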

Data and datasets are collections of facts, numbers, or examples about something, like a spreadsheet filled with information about weather, books, customer purchases, injury statistics, incident reports, productivity, or any other information that can be collected and stored in a database. There are open datasets and closed datasets.
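To make the idea of a dataset concrete, here is a small sketch in Python using only the standard library. The department names and injury counts are invented for illustration and do not come from any real injury data.

```python
import csv
import io

# A tiny, made-up incident dataset, as it might appear in a spreadsheet export.
raw = """department,year,recordable_injuries
Maintenance,2023,4
Maintenance,2024,2
Shipping,2023,1
Shipping,2024,3
"""

# Read the rows and total the recordable injuries per department.
totals = {}
for row in csv.DictReader(io.StringIO(raw)):
    dept = row["department"]
    totals[dept] = totals.get(dept, 0) + int(row["recordable_injuries"])

print(totals)  # {'Maintenance': 6, 'Shipping': 4}
```

Each row is one record and each column is one kind of fact; an AI system works with the same basic structure, just with far more rows and columns.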

Open datasets are available to anyone who wants to use them. They are free and easily accessible online. They can be downloaded, shared, and used for research, apps, or learning, and they are often released by governments, universities, or nonprofits. Examples include U.S. Census data, global weather records from NOAA, COVID-19 statistics from the World Health Organization, and NASA's space images and mission data. Another way to think of open datasets is as a public library: everyone can walk in and read the books. Open datasets help drive innovation, education, and public research.

Closed datasets are accessible only to certain people or organizations. They are protected by privacy rules, business secrets, or copyrights, and they may require permission, payment, or special agreements to access. They are most often used by companies or hospitals to protect sensitive information. Examples of closed datasets include customer data held by Amazon or Google, medical records stored by hospitals, internal financial spreadsheets in a business, private surveys or research done for profit, a company's internal injury or incident statistics, a company's internal learning management system (LMS), a company's production information, and a company's customer list. Think of closed datasets as a personal diary: not for public reading without permission. Closed datasets help protect sensitive information and prevent misuse.

The second thing to understand is the two types of AI systems: open AI and closed AI. These terms are not about specific companies (like OpenAI); they describe the accessibility and transparency of an AI system.

In open AI, the design, code, or data behind the AI is available to the public. People can see how it works, test it, or even improve it. Examples of open AI include TensorFlow, an open-source machine learning library developed by Google and used for building machine learning models; PyTorch, an open-source machine learning library created by Meta (Facebook) and used in research and production for deep learning tasks; and OpenCV, an open-source computer vision library used for image and video analysis.

In closed AI, the design, code, and data are kept private by the company that made it. You can use the product, but you don't know exactly how it was built. Examples of closed AI include ChatGPT (by OpenAI) and Microsoft Copilot, which are not publicly available for modification; Google Assistant and Amazon Alexa, smart assistants built on proprietary AI models that aren't open to the public; and BloombergGPT, a financial AI model trained on Bloomberg's private data and used internally for financial insights and analysis.

It is important to understand whether you are using open AI or closed AI. Open AI lets you (or developers) inspect how the system works and where the data comes from, which is good for building trust in the information, and it allows developers to build new things on top of existing work. Closed AI, by contrast, protects intellectual property and may reduce risks from misuse or tampering.

In part 2 of this series of articles, I will explore how AI is changing occupational safety and some of its uses by safety professionals.

This is part 1 of a 3-part series of articles briefly summarizing the basic information I have begun to accumulate as I continue my journey to better understand how artificial intelligence (AI) is being used and can be used by Safety Professionals. The second part will be posted mid-August and the third part will be posted at the beginning of September. I am learning that as safety professionals responsibly embrace AI, understanding its many uses will improve workplace safety.

For more information and/or assistance, contact:
Wayne Vanderhoof CSP, CIT
Sr. Consultant/President
RJR Safety Inc.
