CCI staff share recent surveys, reports and analysis on risk, compliance, governance, infosec and leadership issues. Share details of your survey with us: editor@corporatecomplianceinsights.com.
Estimate: AI to save 12 hours per week by 2029
Knowledge workers remain optimistic about the impact artificial intelligence (AI) will have on their workflows, according to a Thomson Reuters report that predicts professionals will save as many as 12 hours per week by the end of this decade.
Thomson Reuters’ “Future of Professionals” report, based on surveys of more than 2,200 professionals working across legal, tax, risk and compliance globally, also indicates that over the next year, the average knowledge worker expects AI to save them four hours per week, the equivalent of adding an additional colleague for every 10 employees.
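The colleague equivalence follows from quick arithmetic — a minimal sketch; the 40-hour work week is our assumption for illustration, not a figure stated in the report:

```python
# Check the report's equivalence: 4 hours saved per worker per week,
# across 10 workers, is roughly one extra full-time colleague.
# A standard 40-hour work week is assumed (not stated in the report).
HOURS_SAVED_PER_WORKER = 4
WORKERS = 10
FULL_TIME_WEEK_HOURS = 40

total_hours_saved = HOURS_SAVED_PER_WORKER * WORKERS      # 40 hours
extra_colleagues = total_hours_saved / FULL_TIME_WEEK_HOURS  # 1.0

print(f"{total_hours_saved} hours saved per week = "
      f"{extra_colleagues:.0f} extra colleague per {WORKERS} employees")
```

Ten workers each saving four hours frees up 40 hours per week, i.e. one full-time week of capacity.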
A few other key findings:
- 77% of professionals believe AI will have a high or transformational impact on their work over the next five years. Additionally, 78% say AI is a “force for good” in their profession.
- 79% of professionals predict significant or moderate improvement in innovation within their companies over the next five years. Over that same period, they anticipate 56% of work will utilize new AI-powered technology.
- Just over half of respondents (51%) said AI will help improve work-life balance, while 57% expect greater opportunity for continual skill development.
Report: 62% of companies don’t have full faith that risk monitoring meets contractual & regulatory requirements
Almost two-thirds of businesses surveyed (62%) do not strongly believe their risk monitoring program is meeting contractual and regulatory requirements, according to a small survey by third-party risk management provider Supply Wisdom.
The company’s “Risk Management in a Technology-Driven World” survey provides insights into how companies are tackling risk assessment within their supply chains, the types of risks they’re prioritizing and their use of technology like AI to monitor risk.
A few other key findings:
- Nearly 80% of respondents view technology as very or extremely important in risk management programs, but a majority (57%) aren’t yet using AI in risk assessment.
- The top risk types monitored for are financial risk (65%), operations risk (64%), compliance risk (51%) and cyber risk (51%).
- North American companies use fewer third-party and Nth-party vendors than European companies; 47% of European companies reported using vendors in as many as 49 countries, compared with just 22% of North American companies.
Survey: Only one-third of corporate leaders believe AI policy will establish necessary guardrails
As debates over how to regulate artificial intelligence heat up, new research reveals concerns over current and future policy effectiveness — and a lack of compliance readiness. Only about one-third of corporate leaders believe current regulations governing artificial intelligence (AI) are very effective and that future policies will provide the necessary guardrails, according to a survey by Berkeley Research Group (BRG).
BRG’s “Global AI Regulation Report,” informed by survey responses from more than 200 corporate leaders and executive-level lawyers, paints a picture of the expected effectiveness of AI regulation and keys to the development of effective AI policy.
A few key findings:
- Only 36% of respondents feel strongly that future AI regulation would provide the necessary guardrails. About one-third of respondents believe current policy is “very effective” — roughly the same proportion who believe it is “moderately effective” or “slightly/not effective.”
- Only four in 10 are highly confident in their ability to comply with current regulation and guidance.
- Fewer than half of all organizations have implemented internal safeguards to promote responsible and effective AI development and use.
Survey reveals AI training gap among government fraud investigators
AI is supercharging financial fraud, challenging federal and state agencies to identify and combat the bad actors behind these increasingly sophisticated crimes. Despite this, few government investigators have been trained recently on AI, according to a post-event survey by financial investigation software provider Personable.
The company’s AI summit, held in May in Washington, D.C., brought together public sector decision-makers and fraud investigators; attendees were surveyed on where government agencies stand in their deployment of AI.
A few key findings:
- 74% of attendees had not received relevant AI training in the past year, highlighting a significant skills gap.
- Despite a lack of prior training, 79% of attendees expressed significant interest in acquiring AI skills specific to financial investigations.
- 82% of attendees face the challenge of reducing costs associated with time-consuming investigative processes.