Artificial Intelligence (AI) has emerged as a transformative force, reshaping criminal
investigation processes across many domains. John McCarthy, a pioneer of
Artificial Intelligence, was the first person to formally define AI in 1956. He described it as
“the science and engineering of making intelligent machines, especially intelligent
computer programs.” Artificial intelligence refers to a method or approach for creating
computer systems that can think and reason in a manner similar to the human mind. It
involves the development of controlled robots and intelligent software. The growing power
of computer systems, combined with human curiosity, led researchers to ask, “Can a
machine think and behave as humans do?” While AI promises numerous benefits, ethical
considerations loom large as these technologies become integral to decision-making. Issues
such as transparency, fairness, accountability, and privacy demand careful attention.
Role of AI applications in the criminal justice system
AI algorithms can be applied in various domains, including the criminal investigation system,
to enhance public safety. AI algorithms are utilized in the analysis of radiological images for
medical purposes, particularly in determining the cause and manner of death. This has
significant implications in the fields of criminal justice and medical auditing. AI has also
been applied across forensic science, including DNA analysis. Artificial
Intelligence (AI) is quickly emerging as a pivotal technology in the field of deception
detection.
Facial Recognition: AI plays a crucial role in facial recognition, which is considered a
significant application of this technology. It is ubiquitous in both the public and private
sectors. Intelligence analysts primarily depend on facial recognition images to identify
individuals involved in criminal activities. Examining a large quantity of potentially
correlated images and videos in a precise manner is a laborious and time-consuming task,
susceptible to human error caused by fatigue and other factors. Machines, by contrast, have
the distinct advantage of not experiencing fatigue. The Intelligence Advanced
Research Projects Activity’s Janus computer-vision project is currently conducting trials to
explore how to distinguish between individuals based on their facial features, similar to how
a human analyst would.
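The matching step such systems perform can be illustrated with a minimal sketch: modern face-recognition pipelines typically reduce each face image to an embedding vector and compare vectors by cosine similarity. The embedding values, gallery names, and the 0.8 threshold below are illustrative assumptions, not details of any deployed system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_identity(probe: np.ndarray, gallery: dict, threshold: float = 0.8) -> list:
    """Return the gallery labels whose embeddings are at least `threshold`
    similar to the probe embedding (a hypothetical decision rule)."""
    return [label for label, emb in gallery.items()
            if cosine_similarity(probe, emb) >= threshold]
```

In practice the embeddings would come from a trained neural network, and the threshold would be tuned against known false-match and false-non-match rates rather than fixed by hand.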
Public Security with AI Cameras: Video and picture analysis are employed in the criminal
justice system and by law enforcement agencies to gather specific information or knowledge
about individuals, objects, and activities in order to support criminal investigations. However,
the examination and interpretation of video and image data require a substantial investment in
skilled personnel with expertise in the subject matter, making it a highly labor-intensive task.
Human error is also prevalent due to the overwhelming amount of information, the rapid
advancements in technologies such as smartphones and operating systems, and the limited
number of personnel with the expertise to handle such information. AI technologies possess
the capability to reduce such human errors and function at an expert level. The U.S.
Department of Transportation is currently exploring ways to enhance public safety by testing
an automated traffic accident detection system that relies on recorded data. At the same time,
offenders are capable of creating or repurposing artificial intelligence systems to commit or
facilitate crime.
Gunshot Detection: Can police be present at a shooting scene without being summoned and
without any officer having witnessed the incident? With AI technology, yes: acoustic sensors
can be installed in public infrastructure and connected to a cloud-based computer capable of
accurately identifying and pinpointing gunshots. Each sensor records the timing and sound of
gunfire.
This data from several sensors can help in the investigation of the incident. Sensors can also
help pinpoint the shooter’s location. The whole information, as well as the precise location of
the gunshot, is subsequently transmitted to police headquarters. Also, the data is shown as a
pop-up alert on a computer or mobile screen. According to studies, just 12% of shooting
incidents are reported to police. In such cases, using AI technology to identify gunshots and
notify police can help them respond to a shooting event more swiftly (Hauck, 2023).
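The localization these sensors enable rests on time differences of arrival (TDOA): the sound reaches each sensor at a slightly different moment, and those offsets constrain where the shot occurred. Below is a minimal sketch assuming idealized 2-D sensor coordinates and noise-free timestamps; real deployments are far more sophisticated.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def locate_shot(sensors, arrival_times, grid_step=1.0, extent=200.0):
    """Grid-search the point whose predicted time differences of arrival
    best match the measured ones. `sensors` is a list of (x, y) positions
    in metres; `arrival_times` are the recorded timestamps in seconds."""
    sensors = np.asarray(sensors, dtype=float)
    arrival_times = np.asarray(arrival_times, dtype=float)
    xs = np.arange(-extent, extent + grid_step, grid_step)
    best, best_err = None, np.inf
    for x in xs:
        for y in xs:
            dists = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
            pred = dists / SPEED_OF_SOUND
            # Compare successive differences, so the unknown absolute
            # firing time cancels out of both sides.
            err = np.sum((np.diff(pred) - np.diff(arrival_times)) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

A brute-force grid search is used here only for clarity; production systems solve the same TDOA geometry with closed-form or least-squares methods and must also cope with echoes and timing noise.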
AI and Criminal Investigation in Pakistan
Pakistan recognizes the importance of AI and has been attempting to develop it. The
Department of Robotics and Intelligent Machine Engineering (RIME) NUST, which was
founded in 2011, is one of the leading organizations involved in this field. It is Pakistan’s
first academic robotics and artificial intelligence program. RIME provides
graduate-level coursework and research in this area and adjacent fields (NUST, 2013).
Additionally, the National Centre for Artificial Intelligence (NCAI) at NUST is the most
recent technical project launched by the Pakistani government as of March 2018. A
ground-breaking artificial intelligence (AI) security control system, the first of its type in the
nation, was launched by the Khyber-Pakhtunkhwa Police in Peshawar. Notably, the system
now contains information about female militants, terrorists, and criminals in its database for
the first time. Other digital systems launched by the KP Police IT School include a criminal
record verification system (CRVS), an identity verification system (IVS), a vehicle
verification system (VVS), a one-click SOS alert service for educational and other vulnerable
institutions, the geo-tagging of crime scenes, and hot-spot policing (Nawab et al., 2019).
Additionally, if a suspected person enters the red zone, an alarm is raised, instantly informing
the closest police checkpoint. In time, AI systems can transform the investigative process,
which currently depends on obtaining CCTV footage and therefore frequently suffers delays
and obstacles.
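The red-zone alert described above amounts to a geofence check: test whether a tracked position falls inside the zone and, if so, notify the nearest checkpoint. The circular zone, flat coordinates, and checkpoint names in this sketch are illustrative assumptions, not details of the KP Police system.

```python
import math

def nearest_checkpoint(point, checkpoints):
    """Return the (name, position) checkpoint closest to `point`.
    Plain Euclidean distance is used for simplicity; a real system
    would use geodesic distance on latitude/longitude."""
    return min(checkpoints, key=lambda c: math.dist(point, c[1]))

def red_zone_alert(point, zone_center, zone_radius, checkpoints):
    """If `point` lies inside the circular red zone, return the name of
    the closest checkpoint to notify; otherwise return None."""
    if math.dist(point, zone_center) <= zone_radius:
        name, _ = nearest_checkpoint(point, checkpoints)
        return name
    return None
```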
Medical Applications (Clinical): Biometrics plays a crucial role in criminal investigations by
aiding suspect identification and connecting individuals to crimes through biometric data
such as fingerprints, DNA, behavior, voice, and gait. Various techniques like fingerprint and
DNA analysis, facial recognition, and iris identification are employed for crime investigation
(Abdelfattah, 2014; Ahmed & Osman, 2016). In the realm of cybercrime,
computers and information technology are essential for evidence collection, perpetrator
identification, and tracking crime trails.
Case Studies of Ethical Dilemmas in AI-Driven Criminal Investigation Procedures
Ethical challenges in AI decision-making have been vividly illustrated by real-world cases,
often involving high-profile incidents that shed light on the complex interplay between
technology and ethical considerations (Boshoff et al., 2019; Robinson, 2022; Vecchione,
Levy & Barocas, 2021). This review delves into notable examples, drawing lessons from
these cases and outlining implications for future AI deployments. One of the prominent
ethical dilemmas in AI involves the use of facial recognition technology. Instances where law
enforcement agencies deploy facial recognition systems, such as the controversy surrounding
Clearview AI, raise significant privacy concerns. The widespread use of facial recognition
without clear regulations has prompted debates on the balance between security and
individual privacy.
Human Rights and Dignity
The incorporation of artificial intelligence (AI) into criminal law raises significant ethical
concerns, particularly regarding human rights and dignity. This section provides a thorough
examination of the ethical consequences that arise from the use of AI in criminal law. It
emphasizes the
importance of ensuring that technological progress is in line with fundamental human values.
Inherent Bias and Discrimination: Artificial intelligence systems, which depend on previous
data, have the potential to perpetuate pre-existing prejudices and discrimination. Within the
realm of criminal law, this could lead to an imbalanced focus on specific demographic
groups, thus violating the principles of equal treatment and non-discrimination. Automated
decision-making procedures also have the potential to undermine due-process rights. The
inability of opaque AI systems to offer clear justifications for their conclusions raises issues
regarding individuals’ capacity to contest outcomes and seek redress, potentially infringing
upon the right to a fair trial.
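One way such imbalances can be made visible is a simple audit of decision rates across demographic groups, a basic demographic-parity check. The sketch below is an illustrative metric with hypothetical group labels and data, not a complete fairness analysis.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """`decisions` is a list of (group, flagged) pairs, where `flagged`
    indicates a positive system decision (e.g. flagged as high risk).
    Returns the largest difference in flag rates between any two groups:
    0.0 means identical rates, larger values mean greater disparity."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, flagged in decisions:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    rates = {g: f / t for g, (f, t) in counts.items()}
    return max(rates.values()) - min(rates.values())
```

A large gap does not by itself prove discrimination, but it is the kind of transparent, contestable evidence that opaque systems fail to provide on their own.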
AI in Pakistani courts of law
A judge in a Pakistani court recently employed GPT-4, OpenAI’s most advanced chatbot, to
help render a judgment in a case. This decision sparked widespread debate regarding AI’s
capabilities and the possibility of it replacing legal professionals, including judges. This
article explores each aspect of the debate, as well as discussing the potential shortcomings
and detriments of AI in a court of law. The case of Muhammad Iqbal v Zayad in the Sessions
Court in Phalia, Punjab was a civil suit brought by the plaintiffs over a petrol-pump property
dispute. Judge Amir Munir dismissed their appeal for an injunction. The court used GPT-4 to
formulate the decision based on existing laws, finding that the chatbot’s suggestions were
consistent with Pakistani law, specifically the Code of Civil Procedure, 1908. The judgment
includes an explanation of how AI is shaping the future of legal decision-making, citing
countries such as the UAE and China which have already used AI in courtrooms. It further
reinforces the fact that the chatbot provides a logical explanation of relevant laws and
procedures and states that the difference between GPT-4’s and the judge’s answers is ‘only in
form and not in substance’. In his court order, Munir observed that “if judges develop
friendship with the chatbot programs like ChatGPT-4 or Google Bard, and put right questions
to it based on available data, facts and circumstances of a case”, it can help “reduce the
burden on the human judicial mind by providing relevant and reliable answers.”
CONCLUSION
When AI systems make decisions, determining who is ultimately responsible for those
decisions becomes a complex ethical and legal issue. Ethical frameworks must evolve
alongside technological advancements to foster a responsible and equitable integration of AI
in criminal investigation. Moreover, the concerns raised in this article regarding
accountability, due process and, in particular, data privacy cannot be overlooked. If an AI
system makes a mistake for any reason, how would it be rectified, and who would be held
accountable? How
can we ensure that individuals are given a fair trial when decisions are made without human
input? These are critical questions that need to be addressed before we can confidently
introduce AI into the legal system.