Artificial Intelligence in Business Education
Published on 24 February 2023

Students have been searching the web for answers to academic questions since the advent of the search bar. People then started asking their phones, their kitchens and anything else with a microphone and a voice assistant the questions they sought answers to. But it was rather unlikely that a smartphone assistant was going to write your thesis for you, or answer a question on the profit/ethics trade-off involved in deciding whether or not to outsource customer service.

A lot of discussion has been taking place on the web, as well as in our office, around open language models for dialogue (OLMDs) and how they apply to business education. An OLMD lets someone ask a question and have it answered with a unique response that is 'written' by the artificial intelligence (AI). In simple terms, it is like asking your search engine a question. The difference is that an OLMD does more than present you with snippets from text published on websites related to your question. Instead, it assimilates an answer from several sources, including the web.


Could AI write a dissertation?
This immediately raises the question of the validity of the answers. Search engines work by matching keywords in the question to the content of a page, then ranking the results on signals such as number of views and the relevance of the page's keywords to those of the search. Number of views is popularity. Keyword matching is subject to what researchers call ‘confirmation bias’. In short, a search engine offers what most people believe to be true: popular opinion, not fact. Assessing accuracy requires some fact checking. The performance of academics as teachers is assessed not only on how they impart their knowledge but also on how they assess the knowledge of their students. If teachers marked coursework on popularity rather than validity, the institution of higher education would become an echo chamber of self-validation.
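
As a rough sketch of why that kind of ranking rewards popularity rather than validity, consider the following toy scoring rule in Python. The pages, view counts and weighting below are illustrative assumptions only, not a description of how any real search engine is built.

    # Toy keyword-plus-popularity ranker. Everything here is illustrative:
    # the weights, the pages and the view counts are made up.

    def keyword_overlap(query, text):
        """Fraction of the query's words that appear in the page text."""
        query_words = set(query.lower().split())
        page_words = set(text.lower().split())
        return len(query_words & page_words) / len(query_words)

    def rank(query, pages):
        """Order pages by keyword match blended with view count (popularity)."""
        most_views = max(page["views"] for page in pages)

        def score(page):
            relevance = keyword_overlap(query, page["text"])
            popularity = page["views"] / most_views
            return 0.6 * relevance + 0.4 * popularity  # arbitrary blend of the two signals

        return sorted(pages, key=score, reverse=True)

    pages = [
        {"text": "outsourcing customer service always cuts costs", "views": 90000},
        {"text": "peer-reviewed study of outsourcing customer service trade-offs", "views": 1200},
    ]
    for page in rank("should we outsource customer service", pages):
        print(page["text"])

Nothing in that score measures whether a page is accurate: the widely viewed claim outranks the carefully researched one, which is the point above about popularity and keyword matching standing in for validity.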

OLMDs therefore pose more of a threat to search engines than to educators in terms of answering questions and imparting knowledge. One reportedly passed an MBA exam at a reputable university. On closer inspection, the exam consisted mostly of short-answer, multiple-choice and quantitative questions, all designed to be as objective as possible to mark. In another experiment, when the AI was asked managerial economics questions requiring evaluation and long-form answers, the results were less impressive.

Business is ultimately a subject of doing, not of writing. Successful business people are measured by how their businesses perform, not by their written business plans and carefully laid-out business models. Higher education, however, involves a lot of written assessment. This leads to the inevitable use of OLMDs to support students in writing their assignments, rather than to write the assignments for them. It also adds fuel to the debate about how business students are assessed in general.


Will developers like OpenAI, the maker of ChatGPT, work with academic institutions or against them?
As with ‘Googling questions’, AI is more likely to support student performance at the lower end of the spectrum of student progression, but it can perhaps find new ‘brainstorming’ applications at the higher end, like helping a writer suffering from writer's block. However, there is a line between support and plagiarism that is at times hard to define. As such, there will be counter-developments to prevent or regulate the level of use, as there have been for years with plagiarism-checking software like Turnitin. A possible hole in that tried, tested and true 'catch the copy/paster' approach is that OLMDs provide uniquely worded responses to the user asking the question. This leads to another inevitable question.


Working together to prevent plagiarism could be as simple as offering colleges and universities a service that checks whether a piece of written work was produced by their algorithm; with regard to the traditional model of teaching and assessing, that would be the problem solved. In a study by Mark Huxham (2010) into the performance and attitudes of students assessed orally compared with those assessed in writing, students considered oral assessments to be more inclusive than written ones. The researchers also found that oral assessments act as a powerful tool in helping students establish a ‘professional identity’. Although the study was not on business students specifically, the relevance is obvious. One has to wonder whether students will reap the same benefits from a text-based dialogue with an AI as from a spoken one with their lecturer or peers, be it online or offline.


Will OLMDs catalyse innovation in how business subjects are assessed?
Within a European project on Networked Interaction in Foreign Language Acquisition and Research (NIFLAR), a 3D virtual world was used so that language students could communicate in real time with native speakers of the target language while undertaking different tasks together. In group-work exercises like this, peers informally assess each other and provide feedback.

This experiential learning cycle is completed in Edmundo business simulations through our games. Where we go a step further is in capturing evidence of engagement, which creates the foundation for assessment and for implementing the feedback, and in our support of both students and academics during the game. The open dialogues we keep with students and academics are contextualised to the individual classroom, rather than to the specific subject or topic that OLMDs are capable of discussing.

The most interesting application of OLMDs may be as an improved 'assistant' for students working on an assignment and as a 'plagiarism enforcer' for academics. Whether a dialogue between student and AI is more like Einstein discussing his theories with his first wife, herself a physicist and mathematician, or like Darwin discussing his theories with his wife, a musician, is open to individual interpretation. Whether OLMDs can mirror the quality of the debate in either case is yet to be seen.

Final Verdict

The verdict does seem to be that AI is a step up from "Googling" a question, and an obvious threat to the search giant. However the threat is addressed, it is unlikely that search engines or academic institutions will rank AI-created content higher than original work. The question is whether it will be allowed in either context as an efficiency tool, like a spelling checker, rather than as a fact checker in the way a journalist or academic would be.