COVR—Legal risk assessment (LEGARA)—White paper

Research output: Book/Report › Report


With the advancement of robotics and artificial intelligence, the nature of risks evolves in a way that makes them harder to foresee and mitigate. While specific risks cannot be foreseen as easily as before, general risks can still be identified and mitigated, and this mitigation may benefit from being addressed at both technical and non-technical levels. How we address this changing nature of risks matters for generating and maintaining trust in the technology, and for setting the conditions for the widest possible societal deployment of the technology. This is relevant to all actors in the robotics industry, and even to actors in neighbouring technological industries (IoT, AI, etc.), as an erosion of public trust in one technology or application would affect all other contemporary technologies. Current standards do not tackle this changing nature of risks and are more generally unsuited to current and near-future technology. Compliance with those standards, or even with more precise and updated standards, might not shield producers from damage claims in case of accidents. We cannot wait for public policymakers to find answers to those challenges and must engage in a self-regulation effort. This white paper therefore explores the way engineers and lawyers deal with new types of risks, and attempts to clarify how lawyers and judges are likely to react when dealing with accidents and other legal issues related to developmental technologies.
Original language: English
Number of pages: 39
Commissioning body: Teknologisk Institut
Publication status: Published - 2021

Research areas

  • Faculty of Law: collaborative robotics, artificial intelligence, product liability, machinery directive, safety

