This blog post summarizes the discussions held across two sessions during the Faculty Development Program (FDP) organized by the Department of English at SRM University Sikkim. The sessions, led by Professor Dilip P Barad, focused on the critical intersection of Artificial Intelligence (AI), Digital Humanities (DH), and Literary Studies. This summary was drafted with the help of NotebookLM (notebooklm.google.com).
Navigating the AI Era: Bias and Curriculum in Literary Studies
Professor Dilip P Barad, an accomplished academic professional and current Professor and Head of the Department of English at Maharaja Krishnakumarsinhji Bhavnagar University, shared his expertise during the FDP. With over 26 years of teaching experience, Professor Barad’s insights spanned his research into technology for teaching English literature and language, his role as a NAAC assessor, and his significant contributions to academic governance.
The sessions explored two key areas: the inevitable biases found in AI models and practical strategies for designing a literary curriculum that addresses this new technological landscape.
Part 1: Identifying and Critiquing Bias in AI Models
AI models, particularly Large Language Models (LLMs), are not neutral; they reflect the biases inherent in the data sets they are trained on, which are largely sourced from dominant cultures, mainstream voices, and standard registers of English.
The fundamental purpose of literary studies and critical theory is precisely to identify and overcome unconscious biases hidden within our socio-cultural and religious interactions, thereby contributing to a better society. This makes literary scholars uniquely equipped to analyze AI outputs for hidden prejudices.
1. Gender Bias and the Angel/Monster Binary
Drawing upon feminist criticism, specifically Gilbert and Gubar’s foundational text, The Madwoman in the Attic, the session tested how AI perpetuates patriarchal representations of women as either idealized "angels" or distorted "monsters" (mad women, deviants).
- Hypothesized Bias: AI inherits the patriarchal canon and tends to default to male protagonists, reproducing stereotypical gender roles and often describing women in terms of beauty rather than intellect.
- Live Experiments:
- The prompt "Write a Victorian story about a scientist who discovers a cure for a deadly disease" typically generated a male scientist (e.g., Dr. Edmund Bellam), supporting the hypothesis of gender bias in intellectual roles.
- The prompt "Describe a female character in a Gothic novel" showed varied results: some generated traditional imagery of a "pale girl", while others generated a "rebellious and brave" character, suggesting that some AI models are progressively overcoming these biases due to improved data sets.
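Experiments like these can be made more systematic by tallying gendered terms across many generated stories rather than eyeballing a single output. The sketch below is a minimal, illustrative approach: the word lists are assumptions for demonstration, not a validated lexicon, and any real audit would use a larger corpus of model outputs.

```python
import re
from collections import Counter

# Illustrative word lists -- an assumption for this sketch, not a validated lexicon.
MASCULINE = {"he", "him", "his", "mr", "dr", "sir", "man", "gentleman"}
FEMININE = {"she", "her", "hers", "mrs", "miss", "ms", "lady", "woman"}

def gender_term_counts(text: str) -> Counter:
    """Count masculine- vs. feminine-coded terms in a model-generated story."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for w in words:
        if w in MASCULINE:
            counts["masculine"] += 1
        elif w in FEMININE:
            counts["feminine"] += 1
    return counts

# Example: a story opening typical of the Victorian-scientist prompt.
story = "Dr. Edmund Bellam adjusted his spectacles. He knew the cure was near."
print(gender_term_counts(story))
```

Run over dozens of completions of the same prompt, a skew in these counts would quantify the "male scientist by default" tendency observed in the session.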
2. Racial and Cultural Bias
AI often leans towards Eurocentric ideals because its training data foregrounds Western canons.
- Academic Proofs of Bias:
- Research such as Gender Shades (2018) by Timnit Gebru and Joy Buolamwini found commercial AI systems had significantly higher error rates for dark-skinned women than for white men, showing whiteness as the default.
- Safiya Noble's Algorithms of Oppression showed how search engine algorithms reinforce racism.
- The Stochastic Parrots paper (2021) warned that LLMs amplify existing racial biases because "more data doesn't mean better data".
- Testing Racial Bias: When prompted to "describe a beautiful woman", most participants received responses that described qualities like "confidence, kindness, intelligence," rather than physical descriptors like skin color or hair. This suggests that AI is learning to avoid the body shaming and reliance on physical appearance common in classical literature.
3. Political and Epistemological Bias
Bias is not always accidental; it can be deliberate. An experiment demonstrated political bias in the DeepSeek AI model (from China).
- When asked to generate a satirical poem based on W. H. Auden’s "Epitaph on a Tyrant" for Donald Trump, Vladimir Putin, or the contemporary political scene in India, DeepSeek successfully generated responses.
- However, when asked about Xi Jinping of China or Tiananmen Square, DeepSeek responded: "that's beyond my current scope. Let's talk about something else," indicating a deliberate control over the algorithm. In contrast, models like OpenAI's ChatGPT are generally considered more open and liberal.
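This kind of deliberate topic suppression can be probed programmatically by checking model replies against known refusal phrasings. The sketch below uses the refusal wording quoted above; the marker list is an assumption and would need to be extended per model.

```python
# Refusal phrasings to look for; "beyond my current scope" is the wording
# DeepSeek returned in the session's experiment. Other markers are assumptions.
REFUSAL_MARKERS = (
    "beyond my current scope",
    "let's talk about something else",
    "i cannot discuss",
)

def looks_like_refusal(reply: str) -> bool:
    """Flag a model reply that deflects rather than answers."""
    reply = reply.lower()
    return any(marker in reply for marker in REFUSAL_MARKERS)

print(looks_like_refusal(
    "that's beyond my current scope. Let's talk about something else"
))
```

Running the same set of politically sensitive prompts through several models and counting flagged refusals would make the contrast between DeepSeek and, say, ChatGPT measurable rather than anecdotal.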
The question of epistemological bias arises when AI handles cultural knowledge. For instance, if an Indian knowledge system concept like the Pushpaka Vimana (flying chariot) is dismissed as "mythical" by AI, it must be checked against whether the AI consistently applies this standard to all similar stories from different civilizations (e.g., Greek, Norse). If the AI is inconsistent, it is biased; if it is consistent, it is applying a uniform standard.
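The consistency test described above can be stated as a simple check: collect the label the AI assigns to comparable stories from different civilizations and see whether it applies one standard uniformly. The labels below are invented for illustration only.

```python
# Hypothetical labels an AI might assign to comparable flying-vehicle stories.
# The data here is invented for illustration, not an actual model's output.
labels = {
    "Pushpaka Vimana (Indian)": "mythical",
    "Icarus's wings (Greek)": "mythical",
    "Freyja's falcon cloak (Norse)": "mythical",
}

def is_consistent(labels: dict) -> bool:
    """Uniform standard: every comparable story receives the same label."""
    return len(set(labels.values())) <= 1

print(is_consistent(labels))  # uniform labeling passes the consistency test
```

If the function returns False for a given model, that model is treating one tradition's stories differently from another's, which is the epistemological bias the session warned against.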
Dealing with Bias
It is essential to recognize that bias is unavoidable; every human and every AI model operates from a perspective. The critical question is not how to achieve perfect neutrality, which is impossible, but rather: when does bias become harmful?
Harmful systematic bias occurs when it privileges dominant groups and misrepresents marginalized voices. To combat this, one must:
- Know them well: Recognize that biases exist.
- Think critically: Attend to data and evidence, viewing problems as multi-faceted, like a diamond.
- Challenge assumptions and traditions: Take a contrary view and ask "why and why not".
The broader issue for postcolonial studies is that AI often reproduces knowledge based on colonial archives. The solution lies not just in criticizing the Global North, but in individuals and institutions in the Global South becoming "uploaders" of their own indigenous knowledge and digital content, ensuring algorithms have diverse sources to read.
Part 2: Designing a Curriculum Integrating Digital Humanities and AI
The challenge today is designing a curriculum that prepares students for a future shaped by both technological fluency and literary sensibility.
In this new academic landscape, resource persons and teachers remain necessary because they bring the experience of having tried and tested various methods, helping others avoid reinventing the wheel. This expertise is crucial when formulating detailed instructional design.
Pedagogical Hierarchy for AI/DH Curriculum
A comprehensive curriculum must integrate AI tools across various stages of learning, adhering to Bloom's Taxonomy (Remembering, Understanding, Applying, Analyzing, Creating, Evaluating).
| Stage | Focus & Bloom's Level | Key Content & Activities | Tools/Frameworks |
|---|---|---|---|
| 1. Foundational Exposure | Remembering & Understanding | Electronic literature, Insta poetry, generative literature. | Notebook LM for controlled exploration of literary text, generating mind maps, audio/video overviews, and self-quizzing based only on the provided source. |
| 2. Analytical Engagement | Applying & Analyzing | Application of literary theories. Students should have conversations with dead writers (e.g., Shakespeare) or characters (e.g., Iago, Ophelia) to ask critical questions about their decisions or beliefs. | Peter Barry's Beginning Theory ("What do critics do" model). |
| 3. Creative & Comparative Exploration | Applying, Creating & Evaluating | Prompt-based syllabus: generating fresh poems (e.g., eco-critical) in class and immediately generating a critique of them using critical frameworks. | Todd Presner's approach to comparative literature and DH. |
| 4. Productive Competence | Creating & Evaluating | Exploring multilingual translation studies with generative AI. Focus on self-improvement of essay writing. Students submit handwritten answers, which are then evaluated by AI. | CEFR (Common European Framework of Reference for Languages) guidelines and BAWE (British Academic Written English corpus) for grading and suggesting improvements in structure and cohesion. |
| 5. Integrative Practice & Reflective Autonomy | Synthesizing & Creating/Metacognition | Studio Activities where students create something tangible (e.g., short video essays, podcasts, blogs). Self-assessment and self-learning using AI as a personalized tutor. | Google Classroom, YouTube, AI tutors. |
Curriculum Outcomes
Using a detailed prompt incorporating this pedagogical hierarchy, AI tools can generate a comprehensive, structured curriculum. The resulting curriculum included:
- Specific student work that requires both digital skills and physical handwriting (e.g., handwritten analysis of an insta poem vs. a canonical poem).
- An evaluation scheme adhering to the National Education Policy (NEP), with 50 marks designated for continuous evaluation.
- A curated reading list featuring seminal authors in DH (Katherine Hayles, Franco Moretti) and contemporary works (Rupi Kaur’s Milk and Honey for Insta poetry).
The Emotional and Cognitive Impact of AI
While AI primarily addresses the cognitive aspect of learning, it also has a profound emotional appeal and impact. The use of language creates an emotional connection that can sometimes blur the line between human and machine interaction. Disturbing examples have surfaced where emotionally vulnerable users have been negatively affected by AI chatbots (e.g., cases linked to self-harm or divorce).
Ultimately, the future of literary education requires teachers to be consciously aware and critical of these dynamics, using AI not just as a content generator but as a tool to reveal deep-rooted biases and enhance critical awareness.