When it comes to AI use in the classroom and in your professional lives, how do we determine how much is too much?
AI use becomes too much when it replaces our critical thinking. AI should be a tool, not a crutch.
Where are the grey areas in its use, whether ethical, creative, or otherwise?
In terms of ethical grey areas, there are several. Where exactly is the line between AI assistance and plagiarism? Do students with access to better AI tools have an unfair advantage? Is the use of AI checkers ethical? Is it fair to have an algorithm grade creative academic assignments? Is it ethical to use AI in hiring?
How do we communicate those issues adequately in our policies or guidelines?
Communicating these issues in policies and guidelines requires at least an adequate understanding of the purpose and capabilities of the technology in question. Each of these questions relies on an understanding of what AI can actually do in each scenario.
Could AI's role as an accessibility tool affect how we approach our policies and guidelines?
Policies and guidelines should always be crafted with fairness in mind. This means that policies should recognize that AI, used properly, is a tool, and that it can serve as a tool for accessibility, whether that means broadening access to education or supporting people with disabilities. Policies can and should be shaped by that fact.
How will current and future jobs be impacted by these policies?
Current and future jobs will see some restructuring, and perhaps even growing pains, as AI increasingly finds its way into the workforce. It is my belief that once the various stigmas around AI use are removed and AI's performance in various professions improves, we will begin to see these policies take hold. We will see a more efficient workforce and a field of academia capable of far more than before AI. This starts with transparency in AI usage, as Tang et al. discuss. Specifically, it starts with the “declaration of generative AI usage” (Tang et al., 317) in academic and professional settings.