Ethical Integration of AI in Education: Challenges and Opportunities in the Nigerian Context

Ohaeri, Nneka Cynthia1 and Okoro, Grace Ugomma2

1&2Nigerian Educational Research and Development Council, Sheda, Abuja

Abstract

The integration of Artificial Intelligence (AI) into educational systems presents transformative opportunities to enhance teaching, learning, and administrative processes. In Nigeria, where educational inequalities and resource limitations persist, AI holds promise for bridging gaps in access, personalising learning, and improving decision-making. However, the deployment of AI technologies raises critical ethical concerns, particularly around fairness, transparency, and accountability. Without clear frameworks, there is a risk that AI systems could reinforce existing biases, marginalise vulnerable learners, or operate in ways that are opaque and unaccountable. This paper explores how ethical principles can guide the responsible implementation of AI in Nigeria’s education sector. It identifies key challenges, including data privacy issues, lack of regulatory oversight, and insufficient local capacity to evaluate AI tools. The paper also analyzes gaps in current education and digital policies, emphasizing the need for inclusive stakeholder engagement. A proposed framework is presented to ensure that AI deployment promotes equity and builds public trust. Recommendations include the development of national ethical AI guidelines tailored to the education context, mandatory training for educators and developers, and regular, context-sensitive audits of AI systems. These steps are essential to harness AI’s benefits while safeguarding against its potential harms in Nigerian education.

Keywords: Artificial intelligence, Ethics, Digital Governance, AI policy, Trustworthy AI

About Author

Nneka Cynthia Ohaeri PhD is a Principal Research Officer at the Nigerian Educational Research and Development Council (NERDC), where she contributes to curriculum development, educational research, and policy-driven initiatives aimed at improving learning outcomes in Nigeria. She is deeply passionate about advancing functional, quality, and equitable education across Africa, with particular interest in early childhood education, gender-responsive pedagogy, and school-based gender-based violence prevention. Nneka has played key roles in developing curriculum content and teacher training materials, and she regularly supports capacity-building programmes for educators. She has published papers in reputable national and international journals. Her research interests include early childhood education, ethical and responsible use of artificial intelligence in education, gender equity, and evidence-based education policy. Through her work, she remains committed to leveraging research for inclusive national development.

Grace Ugomma Okoro is the Head of Legal at the Nigerian Educational Research and Development Council (NERDC), where she leads legal advisory, drives policy formulation, and ensures regulatory compliance within Nigeria’s education sector. She holds an LLB, BL, and LLM, and began her career in a reputable law firm, gaining strong experience in litigation and legal advisory. Her academic and professional interests lie in Alternative Dispute Resolution, medical law, family law, and social justice issues, particularly as they relate to education and vulnerable populations. She has published articles in reputable journals and continues to contribute to legal scholarship. With a passion for leveraging the law as a tool for development, she advocates for equitable access to quality education and the protection of rights within the learning environment. Her work reflects a deep commitment to using her legal expertise to influence policy, strengthen institutions, and drive meaningful change in Nigeria’s educational landscape.

Impact Statement

This research ‘Ethical Integration of AI in Education: Challenges and Opportunities in the Nigerian Context’ contributes theoretically by extending global ethical AI principles into a context-specific framework for Nigerian education, highlighting how fairness, transparency, and accountability must be interpreted within low-resource and inequality-prone systems. It enriches existing scholarship by demonstrating that ethical AI is not only a technical issue but a socio-educational concern shaped by policy gaps, institutional capacity, and cultural realities. Practically, the study offers a roadmap for policymakers, school administrators, and EdTech developers to implement AI responsibly through tailored guidelines, educator training, and routine ethical audits. These recommendations can improve trust in AI-enabled tools, reduce algorithmic bias, and protect learners’ data. Ultimately, the research supports evidence-informed decision-making and promotes equitable access to AI-driven innovations in Nigeria’s education sector. 

Citation

APA 7th Style

Ohaeri, N. C., & Okoro, G. U. (2026). Ethical integration of AI in education: Challenges and opportunities in the Nigerian context. Shodh Sari-An International Multidisciplinary Journal, 5(01), 127–137. https://doi.org/10.59231/SARI7894

Chicago 17th Style

Ohaeri, Nneka Cynthia, and Grace Ugomma Okoro. “Ethical Integration of AI in Education: Challenges and Opportunities in the Nigerian Context.” Shodh Sari-An International Multidisciplinary Journal 5, no. 1 (2026): 127–137. https://doi.org/10.59231/SARI7894.

MLA 9th Style

Ohaeri, Nneka Cynthia, and Grace Ugomma Okoro. “Ethical Integration of AI in Education: Challenges and Opportunities in the Nigerian Context.” Shodh Sari-An International Multidisciplinary Journal, vol. 5, no. 1, 2026, pp. 127–137, https://doi.org/10.59231/SARI7894.

Introduction

Artificial Intelligence (AI) is rapidly transforming the global education landscape by automating administrative tasks, personalising learning, and enabling data-driven decision-making. AI can be viewed as a tool that processes and evaluates large amounts of data to make forecasts, spot trends, and automate certain tasks (Henari & Ahmed, 2024). Recent studies show that people working with AI complete more tasks, complete them faster, and produce higher-quality outputs than those working without it (Dell’Acqua et al., 2023). AI can also reduce the teacher’s workload, allowing more focus on learner-centred experiences. According to Sytnyk and Podlinyayeva (2024), AI has the capability to offer personalized training, empower teachers, assist impaired students with AI-powered language translation, and create immersive and engaging learning experiences, among other benefits.

AI applications in education include adaptive learning platforms, predictive analytics for student performance, chatbots for tutoring, and intelligent resource recommendation systems. These innovations hold promise for addressing systemic challenges such as teacher shortages, inconsistent curriculum delivery, and limited access to quality resources. However, their use also raises ethical dilemmas. Algorithms may unknowingly discriminate, amplify societal biases, or make opaque decisions that are difficult for educators to explain or challenge. The implementation of AI must therefore be critically assessed within the Nigerian educational context to prevent harm and maximize benefit. In Nigeria, as efforts to digitalize education intensify, AI offers opportunities to improve access and efficiency across the educational spectrum. Yet without robust ethical frameworks, AI tools risk reinforcing educational inequities, violating student privacy, and reducing transparency in educational decision-making.

Core Ethical Principles in AI for Education

Ethics is broadly defined as the study of right and wrong behavior; it provides the guiding principles that govern how we act and make decisions. In technology, and especially in AI, ethical considerations help ensure that innovations serve human well-being, uphold fairness, and respect fundamental rights. This was echoed by the “Ethical Rules for New-generation Artificial Intelligence” (2021), which emphasized that the use of AI must contribute to maximizing human well-being. It is important to identify ethical principles to follow as AI is deployed in education, so as to curb the dangers it poses to humanity (UNESCO, 2022). AI should be seen as a tool that supports humans, not one that harms them.

With AI’s ability to analyze data at scale, adapt instruction, and influence student pathways, ethical reflection on its use has become imperative. Global AI ethics frameworks such as UNESCO’s AI Recommendation (2021), the European Commission’s guidelines (2019), and the IEEE’s (2019) emphasize several core principles for achieving trustworthy AI. The following are fundamental for educational AI:

1.  Human Agency and Oversight

AI in education must function as an assistant or copilot rather than an autonomous authority, preserving the teacher’s central role in shaping learning. Teachers need interfaces that allow them to review the system’s reasoning, provide feedback, and override suggestions when context calls for it (European Commission, 2019). According to Luckin, Holmes, Griffiths, and Forcier (2016), educators bring nuanced understandings of learner motivation, socio-emotional factors, and classroom dynamics that no algorithm can fully capture. Retaining final decision-making authority ensures that AI insights are integrated into lesson plans in ways that respect the school’s curriculum goals, cultural context, and students’ unique needs.

To exercise effective oversight, teachers must be trained not only in how to use AI tools but also in when to question their outputs. Luckin et al. (2016) suggest that schools ensure that human empathy, professional ethics, and institutional values guide every decision affecting a learner’s trajectory. This protects against inappropriate reliance on imperfect models and reinforces trust among students, parents, and educators.

2.  Technical Robustness and Safety

AI in classrooms must be reliable and secure. Systems should be thoroughly tested to ensure correct operation and protected against failures or attacks (e.g. adversarial examples). While the specifics of robustness depend on the technology, the IEEE and EU guidelines underscore that trustworthy AI must meet performance and safety standards (e.g. avoiding malfunctions that could mislead students). For instance, an automated exam grader must be rigorously validated so it does not falsely assess student work. Technical robustness also includes resilience (the system must work under varied conditions) and clear contingency plans if the AI fails. By designing and auditing AI tools to be safe and dependable, educators can prevent harm (e.g. mis-assessment or exposure to inappropriate content) and maintain confidence in AI systems.
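The validation step described above can be sketched as a small pre-deployment check: before an automated grader is trusted in a classroom, its scores are compared against human-marked samples. Everything here is illustrative, not a real grading model; the function names, tolerance, and agreement threshold are assumptions.

```python
def validate_grader(grader, labelled_samples, tolerance=1.0, min_agreement=0.9):
    """Pass only if the grader matches the human score (within `tolerance`)
    on at least `min_agreement` of the held-out, human-marked samples."""
    agreed = sum(
        1 for text, human_score in labelled_samples
        if abs(grader(text) - human_score) <= tolerance
    )
    return agreed / len(labelled_samples) >= min_agreement


# Illustrative stub: a "grader" that scores by word count (deliberately naive).
def stub_grader(text):
    return min(10.0, len(text.split()) / 5)


samples = [
    ("short answer text here", 1.0),
    ("a much longer and more detailed answer " * 5, 10.0),
]
print(validate_grader(stub_grader, samples))  # prints False: the stub fails the check
```

A grader that fails such a check would be withheld from deployment, which is exactly the contingency planning the guidelines call for.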

3.  Privacy and Data Protection

AI systems in education handle deeply personal information: learning records, behavioral patterns, even socioeconomic indicators. Misuse or misinterpretation of such data can harm students’ privacy and dignity. A comprehensive privacy and data governance framework should include:

  1. Data Minimization & Purpose Limitation: Only collect information directly necessary to support learning objectives or system functionality. For example, if an AI tutor adapts reading level, it doesn’t need to store students’ home addresses or socioeconomic details (UNESCO, 2021).

  2. Informed Consent & Transparency: Before any data collection, provide clear, age-appropriate notices explaining what data will be collected, why they are needed, and who will access them (UNESCO, 2021).

  3. Role-Based Access Controls: Implement strict permissions so that only those who need the data can access them; for example, a teacher can view only the records of their own class. Use multi-factor authentication and periodic access reviews to prevent privilege creep.

  4. Data Retention & Deletion Policies: Define explicit timelines for how long each category of data is kept. For instance, academic results might be retained until one academic year after graduation, then archived or anonymized.

  5. Third-Party Management & Data Sharing Agreements: Whenever student data are shared with external vendors, such as analytics platforms or content providers, or used for research purposes, establish data processing agreements that specify permitted uses, security standards, and breach notification timelines, and ensure compliance with local data-protection laws. In Nigeria, the Data Protection Act (2023) provides a legal foundation: it requires that personal data (including student records) be processed lawfully and securely.

  6. Audit Trails & Monitoring: Log access to student data and conduct regular compliance audits to verify that policies are followed.

  7. Student Rights & Data Portability: UNESCO (2021) notes that it is important to uphold learners’ rights to review their own data records.

4.  Transparency 

Transparency entails making the functioning of AI systems understandable to users, especially teachers and school administrators (Floridi, 2019). The “black box” nature of many AI models creates challenges in interpreting how decisions are made, such as grading, admission selection, or learning pathway recommendations. Transparency can be promoted by requiring AI vendors to disclose their decision logic, developing explainable AI tools with human-readable outputs, and engaging educators in the design and evaluation of AI systems.

5.   Diversity, Non‑Discrimination & Fairness

Fairness refers to the unbiased and equitable treatment of all learners by AI systems: AI must treat all learners equitably regardless of gender, ethnicity, socioeconomic background, location, or ability level (Holmes, Bialik, & Fadel, 2019). Yet many AI tools are trained on datasets that lack diversity. Studies warn that many AI tools carry Western biases (e.g. tutoring examples geared to Western contexts) that leave students elsewhere at a disadvantage (Abbas, 2025). This can result in biased outputs that marginalize already disadvantaged groups, such as learners in rural communities or those with disabilities, a serious risk for a country as diverse as Nigeria.

To promote fairness, algorithms should be audited for bias before implementation, AI systems should be regularly evaluated for disparate impacts, ethicists and educators should be involved in the development of AI tools, and data collection should represent the diversity of Nigerian learners.
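One minimal form of the pre-implementation bias audit mentioned above is to compare positive-outcome rates across learner groups. The sketch below applies the widely used “four-fifths rule” (no group’s rate should fall below 80% of the highest group’s rate); the groups and data are invented for illustration, and a real audit would use far richer metrics.

```python
def selection_rates(outcomes):
    """outcomes: list of (group, passed) pairs -> positive-outcome rate per group."""
    totals, passed = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        passed[group] = passed.get(group, 0) + (1 if ok else 0)
    return {g: passed[g] / totals[g] for g in totals}


def passes_four_fifths(outcomes, threshold=0.8):
    """Flag disparate impact: the lowest group rate must reach
    `threshold` times the highest group rate."""
    rates = selection_rates(outcomes)
    return min(rates.values()) >= threshold * max(rates.values())


# Invented audit data: 80% of urban learners but only 40% of rural learners
# receive the positive outcome, so the check fails.
data = ([("urban", True)] * 8 + [("urban", False)] * 2
        + [("rural", True)] * 4 + [("rural", False)] * 6)
print(passes_four_fifths(data))  # prints False
```

A failing audit would trigger exactly the follow-up the text calls for: involving ethicists and educators, and re-examining whether the training data represent Nigerian learners.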

6.      Societal & Environmental Well-Being

Beyond individual classrooms, AI in education should contribute positively to broader societal goals such as closing achievement gaps, supporting inclusion of learners with disabilities, and minimizing environmental footprint (IEEE, 2019).

7.   Accountability and Responsibility 

Accountability ensures that human actors, developers, policymakers, and educators remain responsible for AI outcomes. Without accountability mechanisms, harm caused by AI systems may go unaddressed or result in impunity.

To uphold accountability, clear roles and responsibilities must be defined, complaint and redress mechanisms should be established, and legal frameworks should mandate ethical compliance in AI use (European Commission, 2019; IEEE, 2019).

There must be responsibility in the use of AI. To ensure accountability for AI systems and their effects, appropriate oversight, impact assessment, and due-diligence evaluation should be developed (Huang, 2023). Amid the present surge of rapid advancement in artificial intelligence, the technology is progressively being assimilated across various domains, and the emergence of human-like machines has presented a novel challenge in determining accountability (Zhang, 2022). Despite the many benefits of AI in education, without an accountability system, issues such as the leakage of private information, asymmetric power of knowledge, covert operations, and algorithmic infringement are inevitable (Guo, 2021).

Challenges to Ethical AI Integration in Nigeria

In Nigeria, several structural barriers complicate the ethical integration of AI in education:

Limited Digital Infrastructure: The lack of necessary infrastructure, such as a reliable electricity supply, internet connectivity, and other digital equipment that supports AI services, hampers equitable access to AI technologies. Persistent power outages, aging transmission lines, and uneven internet coverage mean that many schools, especially in rural areas, cannot reliably access AI-driven tools. For instance, Nigeria’s national grid collapses frequently due to under-investment and vandalism of substations, distributing only about one-third of its generation capacity, with losses estimated at $29 billion annually (Anyaogu, 2024). Concurrently, educators report that unstable electricity and prohibitively expensive data plans leave underserved communities unable to engage with digital learning platforms (Ajala, 2025).

Low Data Literacy Among Educators and Policymakers:  Closely related is the low level of digital and data literacy among educators and students. UNICEF notes that 78% of youth lack digital literacy skills (UNICEF, 2025).  Teachers may be unfamiliar with even basic software, let alone specialized AI tools. This literacy gap raises two issues: first, students from disadvantaged schools may not benefit equally, undermining fairness. Second, without understanding how AI works, neither teachers nor parents can give informed consent or critically assess AI recommendations. For example, a teacher who does not know how an AI grading tool analyzes work cannot spot if it is biased. Building capacity through training is thus essential for ethical use. Otherwise, AI could become a “black box,” and users might blindly trust outputs that are incorrect or unfair.

Imported AI Solutions and Contextual Misalignment: Much of the current AI technology is developed outside Nigeria, often tailored to Western educational contexts. Cultural and curricular differences mean that imported AI tools may not align with Nigerian needs. For instance, a language-learning app trained on American English might poorly support Nigerian English variations, or an example problem set might reference foreign cultural scenarios. As Abbas (2025) observes, educational platforms powered by AI tend to prioritize Western curricula, leaving students in other regions without access to relevant or localized resources. This “one-size-fits-West” approach can widen educational gaps. Ethically deploying AI requires adapting or co-developing systems with local content. Imported solutions may inadvertently encode biases or assumptions that do not apply in Nigeria, leading to unfair outcomes (for example, an AI career advisor might suggest paths irrelevant to local labor markets). Aligning AI with Nigeria’s cultural and educational context is therefore a significant challenge. While Rani (2024) emphasizes that Artificial Intelligence tools serve as a catalyst for personalized language learning and teaching, their ethical integration in the Nigerian context requires a balance between these technological opportunities and the socio-economic challenges of digital equity.

Resource and Policy Constraints: Nigeria lacks mature AI governance and standards. While NITDA is drafting a national AI strategy, there are currently no sector-specific regulations for AI in education. This means schools have little guidance on ethical implementation. Furthermore, funding and technical expertise are limited. Many schools cannot afford commercial AI products, and few local companies supply tailored solutions. These constraints can pressure schools to adopt low-cost foreign tools, exacerbating the misalignment issue. Finally, ethical oversight bodies (like data protection authorities) are still building capacity, so violations might go unaddressed. Together, these factors create an ecosystem where ensuring ethical AI use requires proactive policy work.

Opportunities and Policy Recommendations

Despite the challenges, several opportunities and strategies can promote ethical AI in Nigerian education:

Strengthen Institutional Leadership: National agencies must take the lead. NITDA has already initiated Nigeria’s first AI policy efforts: as of 2023 it completed a draft National AI Policy (NAIP) and is developing a comprehensive National AI Strategy emphasizing “ethical and responsible” AI use (White & Case, 2025). NITDA can extend this mandate to education by collaborating with the Ministry of Education. For instance, NITDA could issue guidelines or standards for AI tools in schools, ensuring adherence to privacy and fairness norms (leveraging lessons from the Nigeria Data Protection Act, 2023). Meanwhile, the Nigerian Educational Research and Development Council (NERDC) should integrate AI and ethics into curricula. The current Basic Education Curriculum Framework provides an opening to include digital literacy and AI concepts. By embedding AI-related competencies (from UNESCO’s AI competency frameworks for students and teachers) early in schooling, Nigeria builds a foundation for informed use. NITDA, the NCCE, and NERDC can also jointly support teacher training programs to raise AI awareness among educators.

Invest in Infrastructure and Access: To overcome limited connectivity, public-private partnerships should expand internet and power access for schools. Nigeria’s UNICEF/Generation Unlimited programme offers a model: by partnering with telecom providers, they recently connected over 1,000 rural schools to the internet and distributed thousands of learning devices (UNICEF, 2025). Continuing and scaling such initiatives will enable AI deployment. Policy recommendations include prioritizing school internet connectivity in national ICT plans and subsidizing data costs for educational services. Equally, initiatives to improve electricity (e.g. solar panels for schools) will make it feasible to use AI tools regularly.

Embed Ethics Early and Continuously: Ethics should be woven into AI education from the start. Experts like Dureke (2025) propose “introducing AI literacy and ethics from the primary school level” so that every learner grows up aware of AI’s benefits and risks. Similarly, teacher education programs should include ethics modules. Nigeria can adopt UNESCO’s approach of translating ethical principles into educational policy action. For example, school technology policies should mandate transparent AI: students should be told when an AI tool is used and how it affects their learning. Embedding ethics means not just teaching technical skills, but also fostering values like empathy, fairness, and digital citizenship alongside AI use. This holistic approach “protects our values, ensures inclusion, and builds a society where innovation benefits everyone,” as Dureke (2025) emphasizes.

Promote Inclusive Local Innovation: Nigeria can leverage its large and creative youth population to develop context-appropriate AI solutions. The proposal to form a multi-sectoral “AI Education Task Force” (including NITDA, curriculum bodies, universities, and civil society) is a good model. Such a body could solicit input on what types of AI tools are needed – for example, content in local languages or analytics tuned to Nigeria’s curriculum. Encouraging local start-ups and university research to focus on Nigerian education challenges will reduce reliance on unsuitable imports. Funding schemes and hackathons (aligned with ethical guidelines) could spur creation of inclusive AI platforms, from exam prep apps to sign-language tutors. Nigeria’s commitment to international AI ethics (e.g. signing the 2023 Bletchley Declaration) should translate into domestic support for African-oriented AI research.

Enact and Enforce Relevant Policies: In addition to broad AI strategy, specific policies can reinforce ethics. For instance, education authorities could require any AI-based education service to meet certain standards (e.g. obtain an ethics certification, similar to how the Nigerian Universities Commission approves universities’ curricula). Nigeria’s Cybercrimes Act and Data Protection Act provide a starting legal framework; extending these to cover AI in education (for instance, rules on automated decision-making about students) would add safeguards. Regular audits of AI systems in schools and clear channels for reporting concerns will enhance accountability. Policy dialogues should involve teachers, parents, and student representatives to ensure policies reflect community values.

Conclusion
AI has the potential to revolutionize education in Nigeria, but without ethical foresight, it can deepen divides and erode trust. Fairness, transparency, and accountability must guide all stages of AI implementation to ensure it serves the public good. As Nigeria transitions towards greater digital integration, embedding ethical AI practices will be critical for sustainable, inclusive, and just educational reform.
Statements and Declarations

Peer-Review Method: This article underwent a rigorous double-blind peer-review process conducted by two independent external experts in the fields of Educational Technology and Ethics. This ensures that the research meets the highest academic standards for global indexing.

Competing Interests: The authors (Nneka Cynthia Ohaeri and Grace Ugomma Okoro) declare that they have no financial or personal relationships that could have inappropriately influenced or biased the findings, analysis, or conclusions presented in this study.

Funding: This research was conducted independently as part of the authors’ professional scholarly activities at the Nigerian Educational Research and Development Council (NERDC). No specific external grants or commercial funding were received for this work.

Data Availability: The data supporting the analysis of AI integration in the Nigerian educational context are derived from current policy reviews, academic literature, and institutional reports. The qualitative framework used for this study is available from the corresponding author upon reasonable request.

Licence: Ethical Integration of AI in Education: Challenges and Opportunities in the Nigerian Context © 2026 by Ohaeri, N. C. and Okoro, G. U. is licensed under CC BY-NC-ND 4.0. This work is published by the International Council for Education Research and Training (ICERT).

Ethics Approval: As this study involves a secondary analysis of educational policies and ethical frameworks and does not involve direct experimentation on human or animal subjects, it was deemed exempt from formal ethical review by the Internal Research Committee of the Nigerian Educational Research and Development Council (NERDC), Sheda, Abuja.

References
  1. Abbas, A. (2025, January 25). Western bias in AI: Why global perspectives are missing. Unite AI. https://www.unite.ai/western-bias-in-ai-why-global-perspectives-are-missing/

  2. Ajala, S. (2025, March 2). What a poor infrastructure, skill gap hinders AI, technology in Nigeria. Business Day. https://businessday.ng/interview/article/how-poor-infrastructure-skill-gap-hinder-al-technology-education-in-nigeria-ajala

  3. Anyaogu, I. (2024, December 11). Explainer: Why Nigeria’s power grid is failing. Reuters. https://www.reuters.com/world/africa/why-nigerias-power-grid-is-failing-2024-12-11

  4. Dell’Acqua, F., McFowland III, E., Mollick, E., Lifshitz-Assaf, H., Kellogg, K. C., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality [Working paper]. Harvard Business School.

  5. Dr. Dureke urges FG to embed AI education in curriculum. (2025, April 25). Vanguard News. https://www.vanguardngr.com/2025/04/dr-dureke-urges-fg-to-embed-ai-education-in-curriculum

  6. European Commission. (2019). Ethics guidelines for trustworthy AI. High-Level Expert Group on Artificial Intelligence. Luxembourg: Publications Office of the European Union. https://doi.org/10.2759/346720

  7. Floridi, L. (2019). Establishing the rules for building trustworthy AI. Nature Machine Intelligence, 1(6), 261–262. https://doi.org/10.1038/s42256-019-0055-y

  8. Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1), 1–13. https://doi.org/10.1162/99608f92.8cd550d1

  9. Guo, Y. (2021). The cultural implications and ethical risks of algorithm society. Chinese Book Review, 2021(9), 45–53. https://doi.org/10.3969/j.issn.1002-235X.2021.09.006

  10. Huang, L. (2023). Ethics of artificial intelligence in education: Student privacy and data protection. Science Insights Education Frontiers, 16(2), 2577–2587. https://doi.org/10.15354/sief.23.re202

  11. IEEE global initiative on ethics of autonomous and intelligent systems. (2019). Ethically aligned design (version 2): A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. IEEE.

  12. Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson Education.

  13. Nigeria Data Protection Commission. (2023). Nigeria data protection act. FRN.

  14. Sytnyk, L., & Podlinyayeva, O. (2024). AI in education: Main possibilities and challenges. InterConf (May 19–20, 2024). Brighton, UK, (45(201)), 569–579. https://doi.org/10.51582/interconf.19-20.05.2024.058

  15. United Nations Educational, Scientific and Cultural Organization. (2021a). Recommendation on the ethics of artificial intelligence. UNESCO.

  16. United Nations Educational, Scientific and Cultural Organization. (2021b). The Open University of China awarded UNESCO Prize for its use of AI to empower rural learners. https://en.unesco.org/news/open-university-chinaawarded-unesco-prize-its-use-ai-empower-rural-learners

  17. White & Case. (2025, January 27). AI watch: Global regulatory tracker—Nigeria. White & Case. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-nigeria#

  18. Zhang, Q. (2022). Transformation of modes of production in the age of artificial intelligence: An analysis based on the phenomenon of “human-like machines.” Journal of Socialist Theory Guide, 2022(9), 72–79.

  19. Rani, B. T. (2024). Artificial Intelligence tools in Learning English language and Teaching: How can be AI used for Language Learning. Edumania-An International Multidisciplinary Journal, 2(4), 230–234. https://doi.org/10.59231/edumania/9085
