Firm has been summoned to Ottawa on Tuesday to explain why it didn’t go immediately to police
VICTORIA — British Columbia Premier David Eby said it “looks like” OpenAI had the opportunity to prevent the recent mass shootings in Tumbler Ridge, B.C., in which nine people died, as pressure piled on the artificial intelligence firm over its handling of interactions with 18-year-old shooter Jesse Van Rootselaar.
The firm has been summoned to Ottawa on Tuesday to explain why it didn’t go immediately to police after its internal safeguards flagged worrisome interactions between the shooter and its ChatGPT chatbot at least seven months ago.
Eby — who is also calling for national standards for AI companies on reporting potential threats — said Monday there would be a public accounting by the company to explain why it only reported its concerns to police after the Feb. 10 killings by Van Rootselaar, who shot dead her mother, half-brother, five school pupils and a teacher’s aide, then herself.
“From the outside, it looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life, to prevent there from being dead children in British Columbia,” he said. “I’m angry about that.”
Federal Artificial Intelligence Minister Evan Solomon said earlier Monday that OpenAI had been called to Ottawa to discuss safety concerns after learning Van Rootselaar was banned from ChatGPT in June.
The company said the problematic activities on the account didn’t meet the threshold for informing law enforcement at the time because it didn’t identify credible or imminent planning.
Solomon said he was deeply disturbed by the reports and he contacted the American company over the weekend to get more information and to arrange a meeting with its “senior safety team” on Tuesday.
“We will have a sit down meeting to have an explanation of their safety protocols and their thresholds of escalation to police so we have a better understanding of what’s happening and what they do,” he said.
Solomon would not say whether the federal government intends to regulate AI chatbots like ChatGPT but added that all options are on the table.
Eby said that while he was not trying to rush to judgment, he hoped the company would clarify its decisions and that the information would be made public one way or another, either through a coroner’s inquest or a public inquiry.
“We will ensure that the public knows what happened, what decisions were made and why,” he said.
The premier also gave reporters a timeline of his government’s recent interactions with OpenAI, which had coincidentally scheduled meetings with officials about the possibility of opening an office in B.C. on the day of the shootings and the day after.
He said Rick Glumac, B.C.’s minister of state for artificial intelligence, met with the company on Feb. 10 to discuss regulatory issues, while Eby’s principal secretary, Meghan Sali, met the firm’s representatives on Feb. 11.
“During that meeting and the previous meeting, there were no revelations from OpenAI that they had any information to be concerned about,” Eby said.
The meeting on Feb. 10 took place at around 1:30 p.m., shortly before police issued a news release about an active shooter. The meeting the next day took place a couple of hours after Van Rootselaar was named by police as the shooter.
Eby said OpenAI contacted Sali for information on how to reach the RCMP on Feb. 12, the day after Van Rootselaar had been named.
“The time frame is what makes me angry,” he said.
The Wall Street Journal reported Friday that Van Rootselaar’s account was banned after it was flagged for troubling posts, including some that included scenarios of gun violence. OpenAI said it contacted the RCMP after the killings at Van Rootselaar’s home and at Tumbler Ridge Secondary School.
Eby said his government was strongly encouraging Ottawa to establish a consistent national threshold for AI companies when it comes to reporting individuals plotting and threatening violence.
This was to ensure that something like Tumbler Ridge never happens again, and that no company is able to decide on its own to turn a blind eye to problematic conduct.
Gavin Dew, B.C. Conservative Party critic for artificial intelligence, said the public is still learning about what happened, but the families deserve answers.
“So, there is a really important policy conversation that has to be had around how we can balance individual freedom, open information, personal privacy, with the need to make sure that we prevent tragedies like this from happening ever again.”
B.C. Green Party Leader Emily Lowan said the report in the Wall Street Journal “sickened” her. She said OpenAI has a responsibility to report credible threats.
“It’s wildly irresponsible that they held this information back, and they must be held accountable.”
Alan Mackworth, a professor emeritus with the University of British Columbia’s department of computer science who focuses on AI safety and ethics, said in a statement that many professionals, such as teachers and doctors, have a “duty to report” any suspected case of harm to or abuse of a minor.
“These obligations are enshrined in law and/or professional ethics. Similar obligations should be placed on social media and AI companies,” he said.