
What about AI, Burak?

Illustration of a man and AI symbols.

Burak Tunca is director of AI at the Department of Business Administration. He shares some insights into large language models and his work in this fairly new role.

He is responsible for overseeing developments in the AI space and evaluating how teaching and research activities at the department can benefit from integrating those developments.

“At the moment, we have two main questions: First, what is the best way of providing access to advanced AI models and features for all our employees and students to ensure equity? And second, how can we improve efficiency and productivity at the department with agentic AI capabilities, such as task automation and virtual assistants?” says Burak Tunca.

As far as he can determine, most universities in the Nordics have chosen to embrace rather than ban AI.

“It is a transformative technology that is going to stay with us in the years to come, and it is already a skill that employers are looking for. Universities have quickly figured out that banning AI will not be beneficial for anyone. On the other hand, we still see misuse of AI by students. For example, there have been university-wide cases where students submitted essays with references that do not exist,” Burak adds.

At the Department of Business Administration, they are committed to further integrating AI into teaching and learning. For example, this year the master's students in International Marketing and Brand Management are offered two brand-new courses, "AI-driven Digital Marketing" and "AI-assisted Market Intelligence", specifically developed to address the need for further AI competence development. In addition, there will be AI-related faculty-wide seminars, workshops, and webinars that all students can benefit from throughout the semester.

What kind of AI skills are employers looking for?

“We are often in touch with the industry, and frankly speaking, I have never heard any company clearly articulate what they expect from new graduates when it comes to AI competences. This stems from the lack of AI integration in industry. Yet, for virtually all employers, it is a skill they want new employees to bring with them. They want curious graduates who experiment with AI in different ways. It is an evolving technology, so I think what employers are looking for are graduates who are agile and quick to adapt to new developments,” Burak Tunca explains.

He himself uses AI every day for different things, but mostly for experimenting, he says.

“I try new models and tools constantly, evaluating them for teaching and research. I strive to keep my finger on the pulse as best as I can,” Burak Tunca concludes.

 

Term: Brief explanation

Large Language Model (LLM): A deep learning model trained on massive text data to understand and generate human language.
Prompt: The specific input text, question, or instruction given to the LLM to elicit a response.
System Prompt: A set of high-level instructions (often invisible to the user) that defines the LLM's role, personality, and rules of engagement.
Generative AI: A broad AI category that creates new, original content (text, images, code, etc.).
AI Hallucination: When an LLM generates false, incorrect, or nonsensical information while presenting it as fact.
Black Box: Describes the complex nature of LLMs, where the internal workings and reasoning for a specific output are difficult to interpret.
Transformer: The specific neural network architecture that forms the basis of all modern LLMs.
Token: The basic unit of data an LLM processes (a word, part of a word, or punctuation mark).
Attention Mechanism: A component within the Transformer that allows the model to weigh the relevance of different tokens in the input when generating the next one.
Training Data: The vast dataset (e.g., books, articles, web pages) used to teach the model its language skills and knowledge.
Parameter: A numerical value (weight or bias) within the model that is learned during training; often used to measure model size (e.g., billions of parameters).
Inference: The process of using a trained model to generate an output or prediction based on a new input prompt.
Context Window: The maximum amount of text (measured in tokens) the model can process and "remember" at any given time during a conversation.
Fine-tuning: Training an already pre-trained model on a smaller, specific dataset to improve performance on a narrow task.
In-Context Learning: The model learning a task simply by reading examples provided directly within the prompt, without requiring formal training.
Alignment: The field dedicated to ensuring LLMs operate safely, ethically, and in accordance with human instructions and values.
Guardrails: System-level or model-level safety mechanisms designed to prevent the LLM from generating unsafe, harmful, or prohibited content.
Temperature: A setting that controls the randomness or creativity of the LLM's output. Higher values lead to more diverse, less predictable results.
Zero-shot Learning: The model successfully performing a task based on a prompt, without having seen any examples of that task beforehand.
Chain-of-Thought (CoT): A prompting technique where the user instructs the model to "think step-by-step" to arrive at a better, more accurate conclusion.
Vector Embeddings: Numerical representations of words, phrases, or documents that capture their semantic meaning and relationship to other concepts.
Retrieval-Augmented Generation (RAG): A method where the LLM queries an external, up-to-date database for specific information to ground its response, reducing hallucinations.
Multi-modality: The model's ability to process and understand multiple types of input (like text, images, and audio) and generate outputs across those types.
API: Application Programming Interface, the gateway used by developers to send prompts to and receive responses from the LLM programmatically.
Prompt Engineering: The systematic discipline of designing effective prompts to guide the LLM toward desired, high-quality, and reliable output.
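To make a few of these terms concrete, here is a minimal sketch (not from the article) of how a developer might send a prompt to an LLM through an API. It uses the OpenAI Python client as one example; the model name, prompts, and settings shown are illustrative assumptions, not a recommendation.

```python
# Illustrative sketch: sending a prompt to an LLM via an API.
# Shows a system prompt, a user prompt, a temperature setting, and a token limit.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        # System prompt: high-level instructions that set the model's role and rules
        {"role": "system", "content": "You are a concise assistant for marketing students."},
        # Prompt: the user's actual question or instruction
        {"role": "user", "content": "Explain what a context window is in one sentence."},
    ],
    temperature=0.2,  # lower values give less random, more predictable output
    max_tokens=100,   # cap the length of the generated answer, measured in tokens
)

print(response.choices[0].message.content)
```

Running the sketch requires a valid API key; raising the temperature value would make the answer vary more from run to run, which is the "creativity" trade-off described in the glossary.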