The UK Government’s Office for Product Safety and Standards (OPSS) recently published a report on AI and product safety. It is an in-depth report worthy of a full read. We have extracted some high-level points below; a link to download the full report is at the end.
Opportunities and benefits
The primary value of AI systems is their ability to perform complex analytical tasks in real time that would not be possible for humans. Applying AI to consumer products can lead to enhanced safety outcomes for consumers. This can happen indirectly, where consumers benefit from improved product safety performance through AI-led improvements to manufacturing processes, or directly, where AI embedded in a product can identify unsafe usage or optimise the product's performance. One example is predictive maintenance: forecasting when equipment will fail so that repairs can be scheduled in time. By establishing when an intervention is needed, predictive maintenance can play a key role in preventing accidents caused by malfunction or product failure.
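As a rough illustration of the idea (ours, not the report's), the sketch below extrapolates a linear wear trend from hypothetical sensor readings to estimate when a failure threshold will be crossed; the readings, threshold and units are all assumptions made for this example.

```python
# Minimal predictive-maintenance sketch (illustrative only, not from the report).
# Assumes hypothetical vibration readings that drift upwards as a part wears;
# a simple linear trend is extrapolated to estimate when a failure threshold
# will be crossed, so a repair can be scheduled before that point.
import numpy as np

# Hypothetical hourly vibration amplitude readings from one machine (assumed data).
hours = np.arange(0, 200, 10, dtype=float)
vibration = 0.5 + 0.004 * hours + np.random.default_rng(0).normal(0, 0.02, hours.size)

FAILURE_THRESHOLD = 1.5  # assumed amplitude at which the part is deemed unsafe

# Fit a linear trend: vibration ~ slope * t + intercept.
slope, intercept = np.polyfit(hours, vibration, 1)

# Extrapolate to estimate the hour at which the threshold is crossed.
if slope > 0:
    predicted_failure_hour = (FAILURE_THRESHOLD - intercept) / slope
    print(f"Schedule maintenance before hour {predicted_failure_hour:.0f}")
else:
    print("No upward wear trend detected; no maintenance predicted yet.")
```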
AI may also assist engineers and other professionals: by feeding information on constraints, production methods, materials and other variables into an algorithm, they can cut the time and effort a design requires. The algorithm can then generate only solutions that meet safety requirements, leaving designers and engineers free to focus on the design itself.
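A deliberately simplified sketch of that workflow follows; the design parameters, loads and limits are all invented. Candidate designs are enumerated and only those passing the safety constraints are kept for the engineer to review.

```python
# Toy sketch of constraint-filtered design generation (illustrative; the
# constraint values and design parameters are invented for this example).
from itertools import product

# Candidate design space: bracket thickness (mm) and material choice (assumed options).
thicknesses = [2.0, 3.0, 4.0, 5.0]
materials = {"steel": 250.0, "aluminium": 150.0}  # assumed yield strengths, MPa

MAX_STRESS = 120.0     # assumed safety limit, MPa
APPLIED_LOAD = 3000.0  # assumed load, N


def stress(thickness_mm: float) -> float:
    """Crude stand-in for a structural calculation (illustrative only)."""
    area_mm2 = thickness_mm * 10.0  # assume a fixed 10 mm width
    return APPLIED_LOAD / area_mm2  # stress in MPa (N/mm^2)


# Keep only candidates that satisfy the safety constraints, so the engineer
# chooses among designs that are already safe.
safe_designs = [
    (t, m)
    for t, m in product(thicknesses, materials)
    if stress(t) <= MAX_STRESS and stress(t) <= 0.5 * materials[m]
]
print(safe_designs)
```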
Challenges and risks
Transparency and explainability
AI developers must make design decisions that involve trade-offs. One key decision is the type of model to employ: simpler, more traditional models tend to be more transparent and interpretable than complex models, but may not achieve the same level of performance. As a result, many high-performing AI systems lack transparency and explainability.
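The sketch below (synthetic data, our example rather than the report's) makes the trade-off concrete: a logistic regression exposes one readable weight per feature, while a small neural network's behaviour is spread across many weight matrices that resist direct inspection.

```python
# Sketch of the transparency trade-off (illustrative; the data is synthetic).
# A linear model's learned coefficients can be read directly, while a
# neural network's behaviour is much harder to attribute to its inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # three synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # label depends on two of them

interpretable = LogisticRegression().fit(X, y)
print("Readable per-feature weights:", interpretable.coef_)  # directly inspectable

black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500).fit(X, y)
print("Hidden-layer weight matrices:", [w.shape for w in black_box.coefs_])
# The MLP may score as well or better, but its thousands of weights do not
# explain individual predictions the way the linear coefficients do.
```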
Security and resilience
The resilience of an AI system can be challenged by cyber attacks and other adversarial methods, which present a safety risk. Such methods are characterised by attempts to fool an AI system into misclassifying certain inputs or making incorrect or inaccurate predictions. A study quoted in the report showed that neural-network-powered facial recognition systems could be fooled into identifying someone as a different person, with a high degree of confidence, when specially printed multicoloured glasses were worn.
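The printed glasses are a physical instance of an adversarial perturbation. The sketch below shows the same idea in its simplest digital form: a fast-gradient-sign-style step against a toy linear classifier (the weights and input are invented) flips the predicted class with only a small change to the input.

```python
# Minimal adversarial-perturbation sketch (illustrative; weights and input
# are invented). A small, targeted change to the input flips the classifier's
# decision, mirroring the idea behind the printed-glasses attack.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # assumed weights of a toy linear classifier
b = 0.1
x = np.array([0.4, 0.1, 0.3])    # a benign input, classified positive

print("original score:", w @ x + b)  # 0.45 -> class 1

# Fast-gradient-sign-style step: nudge each input component in the direction
# that most decreases the score (the gradient of the score w.r.t. x is w).
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

print("adversarial score:", w @ x_adv + b)  # -0.6 -> the class flips
```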
Fairness and discrimination
AI systems have been shown to produce discriminatory or inaccurate results, often due to biases or imbalances in the data used to train, validate and test such systems.
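A basic diagnostic for this kind of problem, shown here on synthetic data (the groups and rates are invented), is to compare a system's positive-outcome rates across groups.

```python
# Sketch of a simple group-fairness check (illustrative; data is synthetic).
# Compares positive-outcome rates between two groups, a basic demographic-
# parity style diagnostic for bias in a system's outputs.
import numpy as np

rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1000)  # synthetic group labels
# Simulate a biased model: group B receives positive outcomes less often.
positive = np.where(group == "A",
                    rng.random(1000) < 0.60,
                    rng.random(1000) < 0.45)

for g in ("A", "B"):
    rate = positive[group == g].mean()
    print(f"positive-outcome rate for group {g}: {rate:.2f}")
# A large gap between the rates is a signal that the training data or model
# may be encoding bias and warrants closer investigation.
```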
Privacy and data protection
The data-driven nature of AI systems can lead to privacy and data protection issues. In many settings, inferences made by AI systems have been shown to effectively transform non-sensitive data into sensitive personal data, with decisions then made on that basis. For instance, researchers have demonstrated that data collected by smartphones can be used to predict clinical depression.
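To show the shape of the concern (this is a synthetic illustration, not the cited study's method), the sketch below trains a model on invented, seemingly innocuous smartphone-usage features and ends up predicting a sensitive label.

```python
# Sketch of sensitive inference from non-sensitive data (illustrative only;
# the features and their relationship to the label are synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
# Seemingly innocuous smartphone-usage features (assumed): nightly screen-on
# hours, daily step count (thousands), outgoing calls per day.
X = np.column_stack([
    rng.normal(2.0, 1.0, n),
    rng.normal(6.0, 2.0, n),
    rng.normal(4.0, 2.0, n),
])
# Synthetic sensitive label, correlated with those features for illustration.
y = (X[:, 0] - 0.4 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 1, n) > -2).astype(int)

model = LogisticRegression().fit(X, y)
print("accuracy on synthetic data:", model.score(X, y))
# The point: a model trained on 'non-sensitive' behavioural data can end up
# predicting a sensitive attribute, which is what raises the privacy concern.
```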
To download the full report, click here.