Institutional Review Boards for Artificial Intelligence?

Drawing on the history of ethics in medicine, perhaps Institutional Review Boards (IRBs) could serve as a model for ensuring that AI is developed and used ethically.

I foresee serious problems getting industry and commercial AI users to comply with anything like medical ethics. As science-fiction writer David Brin has suggested, we may need personal AI to defend ourselves from AI abuses.

I agree. I’d expect significantly less transparency in AI than in medicine: trade secrets, NDAs, and so on, plus a lack of the strong regulation that forces transparency on big business in the medical field.

This regulatory climate could change if there were sufficient political will, but in today’s US political climate I don’t see it. Big Tech can afford the best lobbyists, and there isn’t much grassroots fear of AI.

When AI is applied to maximize company profits, it will do so ruthlessly. Even when constrained to “be nice to people”, it will find loopholes that may not be so nice for everyone. The trick will be to make ethics profitable.

In my LinkedIn notifications today. The future is here.

I do think something similar to an IRB would be a good idea for AI. Having served on an IRB, I find it interesting to see the large push toward big data and analytics in higher ed without any of the guidelines we put on research (i.e., the IRB process). We have FERPA and HIPAA, but those are centered on privacy, not on whether the “research” we’re doing is ethical or a “good idea” in some sense. All the textbook publishers are now clearly angling for an AI arms race, each trying to create the best adaptive assessment systems and the most “powerful” analytics. So, as a teacher, what is my obligation to my students concerning their academic data? Some schools are tracking every place a student goes, every assignment they do, every post they make, trying to spot signs of “risk”. At some point, is that all just too much?

I agree with @Dan_Eastwood and @Tim that it seems very difficult to impose an IRB process on AI companies. With human-subjects research, the motivation is a combination of journals requiring IRB approval before publication and institutions not wanting to get sued. It may take a similar level of “threat” before the AI industry decides an IRB is the better alternative.

This brings to mind two questions:

  1. To what extent does ‘publish or perish’ capture the forces driving momentum in AI research? My impression is that competitive commercial and strategic interests are more at play.

  2. To what extent do breach-of-privacy (or other applicable) lawsuits impinge on researchers who make use of data, or on the middlemen who aggregate it, as opposed to those who collect it directly from the public?

I’d like to see some ethical constraints on such activities (having read a great deal of dystopian cyberpunk in my 20s that demonstrates the downside of lacking them), but I suspect that researchers may be largely insulated from the market forces underlying many of them.

My second point, that unethical researchers may be insulated by middlemen from the (lawsuit) consequences of their actions, is illustrated by this:
