About Journal
Aarhat Multidisciplinary International Education Research Journal (AMIERJ) is the official journal of the Multidisciplinary Scholarly Research Association, India, run in association with Aarhat Publication and Aarhat Journals, India. It is an open-access, refereed, peer-reviewed online journal that publishes original qualitative and quantitative research. It neither accepts nor commissions third-party content.
Aarhat Multidisciplinary International Education Research Journal (AMIERJ) is an internationally recognised, peer-reviewed, refereed multidisciplinary journal devoted to the publication of original qualitative and quantitative papers. www.aarhat.com/amierj accepts multidisciplinary papers on topics such as:
all fields of the Social Sciences, Arts, and Humanities; Science; Management; Engineering; Library and Information Sciences; Archaeology; Education; Law; Economics; Accounting; Finance; Human Resource Management; Marketing; Architecture; Epigraphy; History of Science; Sociology; Psychology; Morphology; Museology; Papyrology; Philology; Preparation/Conservation; Religion; Underwater Archaeology; English Literature; Geography; Mathematics; etc.
Aarhat Multidisciplinary International Education Research Journal (AMIERJ) is now published in English as well as in Hindi and Marathi, and it is open for submissions from authors all over the world. It is currently published six times a year: in February, April, June, August, October, and December.
Recently Published Articles
Original Research Article | Feb. 28, 2026 | 109 Downloads
ARTIFICIAL GENERAL INTELLIGENCE (AGI): MYTH, REALITY AND FUTURE PROSPECTS
Asst. Prof. Swapna Ramesh Merugu
DOI : 10.5281/amierj.18610168
Abstract
Artificial General Intelligence (AGI) represents a pivotal yet elusive goal in artificial intelligence research, promising machines capable of human-like reasoning across diverse domains. This paper examines AGI through scholarly lenses, distinguishing conceptual myths from empirical realities, reviewing key literature, and analyzing methodological challenges. Drawing on peer-reviewed sources, it identifies research gaps in evaluation benchmarks and ethical frameworks while discussing practical implications for society. Findings suggest AGI remains theoretically feasible but distant, necessitating robust governance.[1][2]
Original Research Article | Feb. 28, 2026 | 78 Downloads
AN AUTOMATIC BOAT GUARD SYSTEM USING SENSOR-BASED MONITORING AND AI-ASSISTED SENSOR FAULT DETECTION
Needhumol Madhusoodanan Pillai
DOI : 10.5281/amierj.18608632
Abstract
Marine transportation continues to face significant safety challenges due to boat overloading, unexpected sinking, fire accidents, and delayed emergency response, particularly in small and medium-sized vessels. Many existing safety measures for vessels such as ferries, water taxis, and workboats rely heavily on manual monitoring and periodic inspection, which may not be sufficient in dynamic and unpredictable marine environments. With increasing dependence on electronic sensing systems for safety monitoring, sensor reliability has also become a critical concern, as unnoticed sensor failures can lead to incorrect safety decisions. This paper addresses the need for a reliable, real-time boat safety solution that can continuously monitor hazardous conditions while ensuring the dependability of the sensing infrastructure itself. By emphasizing automated monitoring, timely alerting, and early identification of sensor malfunctions, the proposed approach aims to reduce accident risks, improve response time, and support preventive maintenance. The work highlights the importance of intelligent and dependable safety systems in modern marine applications to enhance passenger safety and minimize loss of life and property.
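To illustrate the kind of sensor-fault screening the abstract describes, here is a minimal, purely hypothetical sketch: it flags a sensor as "stuck" when its recent readings stop varying, and as an outlier when its latest value drifts far from redundant peers. The function names, thresholds, and data are illustrative assumptions, not the paper's implementation.

```python
import statistics

def detect_faults(readings, stuck_window=5, outlier_z=3.0):
    """Return indices of suspect sensors.

    `readings` is a list of per-sensor time series (newest value last).
    """
    faults = set()
    # 1. Stuck-at fault: the last `stuck_window` samples are identical.
    for i, series in enumerate(readings):
        tail = series[-stuck_window:]
        if len(tail) == stuck_window and len(set(tail)) == 1:
            faults.add(i)
    # 2. Cross-sensor outlier: latest value far from the median of peers.
    latest = [s[-1] for s in readings]
    med = statistics.median(latest)
    spread = statistics.pstdev(latest) or 1e-9  # avoid division by zero
    for i, v in enumerate(latest):
        if abs(v - med) / spread > outlier_z:
            faults.add(i)
    return sorted(faults)
```

In a real system such checks would run continuously alongside the hazard monitoring itself, so that an alert can distinguish "dangerous condition" from "untrustworthy sensor".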
Original Research Article | Feb. 28, 2026 | 93 Downloads
HUMAN MATHEMATICAL REASONING IN THE AI ERA: A COMPARATIVE ANALYSIS OF SKILL DEVELOPMENT AND COGNITIVE DEPENDENCY
Ms. Gauravi Raorane
DOI : 10.5281/amierj.18609637
Abstract
Artificial Intelligence (AI) has become an integral component of contemporary mathematics education, offering tools that support problem-solving, feedback, and conceptual understanding. This study investigates the influence of AI on human mathematical reasoning, with particular emphasis on skill development and cognitive dependency. Data were collected from 173 undergraduate students using a structured questionnaire based on a five-point Likert scale. The study examines relationships between AI usage, mathematical reasoning ability, comparative reasoning performance, and cognitive dependency.
The findings reveal a strong positive correlation between AI usage and mathematical reasoning skills (r = 0.65), as well as comparative reasoning performance (r = 0.60), indicating that AI-assisted learning enhances accuracy, efficiency, and the ability to evaluate multiple solution strategies. However, results also show a moderate positive correlation between AI usage and cognitive dependency (r = 0.48), suggesting that excessive reliance on AI tools may reduce independent problem-solving and critical thinking. Survey responses further indicate that many students prefer manual problem-solving for better conceptual retention and deeper understanding.
The study concludes that AI is most effective when used as a supportive learning aid rather than a replacement for human reasoning. Balanced and guided integration of AI can enhance mathematical learning outcomes while preserving essential cognitive and analytical skills.
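The correlations the study reports (e.g. r = 0.65 between AI usage and reasoning skill) are Pearson product-moment coefficients over Likert-scale totals. A minimal sketch of that computation, using made-up illustrative scores rather than the study's data:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

ai_usage  = [1, 2, 2, 3, 4, 4, 5, 5]   # hypothetical Likert totals
reasoning = [2, 2, 3, 3, 3, 4, 4, 5]
r = pearson_r(ai_usage, reasoning)      # strong positive correlation
```

A value near +1 indicates that higher AI usage co-occurs with higher reasoning scores in the sample; it does not by itself establish causation, which is why the study pairs the correlations with survey responses.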
Original Research Article | Feb. 28, 2026 | 244 Downloads
CYBER INTELLIGENCE AIDS: A NEW LAYER OF DEFENSE
Dr. Divya Premchandran
DOI : 10.5281/amierj.18608144
Abstract
Cybercrimes have increased markedly in recent years and are evolving fast, with artificial intelligence playing a key role in this exponential growth. The impact of AI on cybersecurity is twofold: on the one hand, cybercriminals are using AI to conduct more sophisticated cyber-attacks; on the other, AI is helping to build strong cyber-defense mechanisms, enabling threats from possible attackers to be predicted with greater speed and precision than ever before. AI enables cybercriminals and hackers to exploit vulnerabilities more effectively, avoid detection, execute more sophisticated attacks, and scale their operations. AI-driven social engineering has significantly increased the psychological manipulation and deception used to obtain sensitive information or assets from targets. Even though AI-driven cyber threats have increased, AI still plays a crucial role in improving cybersecurity: advanced machine learning powers threat hunting, and AI technologies can help detect and respond to threats with greater accuracy and speed than traditional measures. This paper gives a brief overview of various cyber intelligence aids in which AI is integrated for threat intelligence, using machine learning to identify and predict malicious threats. This shifts the network's security posture from reactive to preemptive.
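As a purely illustrative sketch of ML-based threat scoring of the kind the abstract surveys (not the paper's own method), the snippet below trains a tiny hand-rolled logistic-regression classifier on toy connection features (failed logins, megabytes exfiltrated) to estimate the probability that an event is malicious. Features, data, and names are invented; real threat-intelligence pipelines use far richer inputs.

```python
import math

def _sigmoid(z):
    # clamp to avoid math.exp overflow for extreme scores
    z = max(-60.0, min(60.0, z))
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain per-sample gradient descent on the logistic loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = _sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def threat_score(w, b, x):
    """Estimated probability that a connection is malicious."""
    return _sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Toy labelled events: [failed_logins, MB_exfiltrated]; 1 = malicious.
X = [[0, 1], [1, 2], [0, 3], [8, 40], [12, 55], [9, 30]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
```

Scoring events before they escalate is what moves the posture from reactive (responding to confirmed incidents) to preemptive (flagging likely attacks early).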
Original Research Article | Feb. 28, 2026 | 86 Downloads
AI AS A SUPPORT TOOL FOR TRAFFIC WARDENS: SURVEY EVIDENCE ON FAIRNESS, PRIVACY AND DISPUTE REDUCTION
Sambhav Gosar
DOI : 10.5281/amierj.18638040
Abstract
India’s traffic challan system relies heavily on traffic wardens who issue fines on the spot. While this human-driven process allows flexibility, it often suffers from errors. Drivers may be fined due to misjudgement, incomplete evidence, or bias, while genuine violations sometimes go unnoticed in crowded or complex traffic situations. These mistakes frustrate citizens, waste administrative effort, and weaken trust in enforcement.
This paper explores how AI can support traffic wardens in making fairer and more accurate decisions. Instead of replacing wardens, AI tools can act as assistants: mobile apps that verify license plate details instantly, machine learning models that flag likely violations based on context, and decision-support systems that help wardens distinguish between genuine offenses and unavoidable actions (such as stopping briefly to avoid an accident). By reducing false positives and strengthening true violation detection, AI can make manual enforcement more transparent and trustworthy.
The vision is a hybrid system where human judgment is enhanced and not replaced by AI, leading to smarter enforcement and stronger public confidence in traffic governance.
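One of the assistant tools the paper envisions, instant verification of license-plate details, can be sketched as a simple format check. The pattern below is a hypothetical simplification of the common Indian registration layout (state code, district number, series letters, four-digit number); real plates have further variants (e.g. Bharat-series "BH" registrations) that a production tool would also handle.

```python
import re

# State code, 1-2 digit district, 1-3 series letters, 4-digit number.
PLATE_RE = re.compile(r"^[A-Z]{2}\s?\d{1,2}\s?[A-Z]{1,3}\s?\d{4}$")

def is_valid_plate(text: str) -> bool:
    """True if `text` matches the common Indian plate layout."""
    return bool(PLATE_RE.match(text.strip().upper()))
```

A warden's app would combine such a check with a registry lookup, so that a mistyped or fraudulent plate is caught before a challan is issued.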
Original Research Article | Feb. 28, 2026 | 123 Downloads
OPTIMIZING INITIAL INTAKE: A COMPARATIVE STUDY OF AI-DRIVEN ASSESSMENT VS. TRADITIONAL HUMAN-LED SCREENING IN OUTPATIENT COUNSELING
Asst. Prof. Sudhendu Kashikar
DOI : 10.5281/amierj.18642145
Abstract
As global mental health systems face an unprecedented surge in demand, the traditional intake process has become a significant bottleneck, often delaying critical care for weeks or months. This study explores the efficacy of Artificial Intelligence (AI) as a frontline tool for preliminary psychological screening, comparing its diagnostic precision and patient-reported outcomes against traditional human-led clinical interviews. In a controlled experimental setting, we recruited N = 120 adult participants seeking outpatient services. These participants were randomly assigned to either an AI-led intake cohort (using a fine-tuned Natural Language Processing model) or a control group led by Licensed Master Social Workers (LMSWs).
Our primary metrics included diagnostic congruence with a "gold standard" independent evaluation, the speed of symptom disclosure, and the quality of the working alliance. The findings indicate a paradoxical "Disinhibitory Effect": participants in the AI cohort demonstrated an 88% diagnostic alignment with independent supervisors, statistically surpassing the human-led group’s 82%. Crucially, the AI system elicited disclosures of "sensitive" clinical data—including substance abuse and suicidal ideation—significantly earlier in the interaction. While the AI group reported lower scores on the Working Alliance Inventory (WAI) regarding empathy, the data suggests that the perceived anonymity of the machine reduces social desirability bias and impression management. This study concludes that AI-driven intake tools offer a robust, scalable solution for clinical triaging. By standardizing the data collection phase, these systems allow human clinicians to focus their expertise on high-level therapeutic intervention, effectively bridging the gap between clinical efficiency and human-centered care.
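The headline "diagnostic alignment" figures (88% vs. 82%) are agreement rates between an intake diagnosis and the independent gold-standard evaluation. A minimal sketch of that metric, with made-up diagnostic labels for illustration:

```python
def agreement_rate(intake, gold):
    """Fraction of cases where the intake diagnosis matches the gold standard."""
    assert len(intake) == len(gold)
    hits = sum(a == b for a, b in zip(intake, gold))
    return hits / len(intake)

ai_intake = ["MDD", "GAD", "PTSD", "MDD", "GAD"]  # hypothetical labels
gold      = ["MDD", "GAD", "PTSD", "GAD", "GAD"]
rate = agreement_rate(ai_intake, gold)  # 4 of 5 match -> 0.8
```

Simple percent agreement ignores chance-level matching; studies of this kind often also report a chance-corrected statistic such as Cohen's kappa.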
Original Research Article | Feb. 28, 2026 | 253 Downloads
A COMPARATIVE REVIEW OF HALLUCINATIONS IN LARGE LANGUAGE MODELS AND HUMAN PERCEPTIONS OF BIAS
Muzammil Mehboob Khan
DOI : 10.5281/amierj.18637894
Abstract
Large Language Models (LLMs) have become integral to a wide range of applications, raising concerns about their tendency to generate hallucinated content and exhibit biases inherited from training data. While prior research has examined hallucination behavior across different AI models, less attention has been given to how these limitations align with human perceptions of bias and trust.
This paper presents a comparative review of existing research on hallucinations in contemporary LLMs, synthesizing findings across multiple studies to identify common trends, evaluation approaches, and reported limitations. In parallel, a human perception study examines how users interpret and judge bias, reliability, and trustworthiness in AI-generated outputs. Participants provide subjective assessments of perceived bias and confidence in model responses, enabling comparison with conclusions drawn in prior technical literature.
The findings reveal a clear divergence between empirically reported hallucination behavior and user perception. Models identified as having lower hallucination tendencies are not consistently perceived as less biased or more trustworthy. Instead, fluent and confident responses often lead to higher perceived reliability, regardless of documented limitations. This highlights a disconnect between technical evaluation and human judgment.
This study emphasizes the importance of integrating human-centered perspectives into LLM evaluation, and underscores the need for transparency, clearer communication of limitations, and trust-aware deployment.
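The divergence the paper describes can be illustrated by ranking models two ways, by measured hallucination rate and by user-perceived trust, and checking how well the rankings agree with Spearman's rank correlation. The data below are invented for illustration; the simple rank helper assumes no ties.

```python
def rank(values):
    """Rank 1 = smallest value; assumes no ties for simplicity."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(x, y):
    """Spearman's rho via the rank-difference formula (tie-free case)."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

halluc_rate = [0.05, 0.12, 0.20, 0.30]   # lower = fewer hallucinations
trust_score = [3.1, 4.5, 4.2, 3.8]       # mean user trust, 1-5 scale
rho = spearman(halluc_rate, trust_score)  # -> 0.2: rankings largely disagree
```

A rho near zero, as in this toy example, is exactly the disconnect the paper reports: the technically most reliable model is not the one users trust most.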