Highlights

  • Reviews
    SU Dongbin, DONG Linpei, ZHANG Yunfeng, ZHAO Peng, LI Kaikai
    Forensic Science and Technology. 2025, 50(2): 189-196. https://doi.org/10.16467/j.1008-3650.2024.0016

    Dried blood spot (DBS) is a sampling technique in which a small volume of blood is collected on a specific type of filter paper. Since the 1960s, DBS has been primarily used for the newborn screening of metabolic diseases. With the development of highly sensitive analytical instruments, the dried blood spot technique now allows accurate quantification of ethanol, stimulants, drugs of abuse and heavy metal elements in biological samples. Owing to these obvious advantages, the application of DBS in forensic toxicology has grown significantly in the past decade, and this growth is highlighted and presented in this article. The remaining challenges of dried blood spot applications, together with suggestions for further research, are also summarized.

  • Research Articles
    ZHU Ming, LUO Gang, FU Xiaoxin, WANG Nian, LU Xilong, ZHANG Yan
    Forensic Science and Technology. 2025, 50(2): 141-147. https://doi.org/10.16467/j.1008-3650.2024.0018

    Footprint features, as one of the biological features of the human body, play an important role in personal identification. At present, most research on footprint recognition uses footprint images as experimental data, with deep learning algorithms as the foundation and auxiliary algorithms to complete high-precision recognition tasks. However, models built on footprint images face a problem: because footprints of different people are similar, the differences between their features continue to shrink as the number of samples increases, and the false detection rate of the model rises. To reduce the interference of footprint similarity with recognition ability, this paper takes dynamic footprints as the research object and proposes a dynamic footprint retrieval method based on multi-class feature fusion. The method uses a spatio-temporal fusion module to integrate the spatio-temporal information of footprints, so that recognition is not limited to the apparent information of footprints. Firstly, a convolutional neural network extracts frame-level features of dynamic footprints, and a feature fusion module computes the complete apparent features of the fused dynamic footprints from a trainable weight matrix and the frame-level features. Secondly, the temporal aggregation branch of the spatio-temporal feature fusion module extracts long-term temporal features from the frame-level features, and these are fused with the frame-level features through an orthogonal fusion calculation to form spatio-temporal features. Finally, the apparent features and spatio-temporal features are fused for dynamic footprint retrieval. A comparative experiment with existing deep learning algorithms is conducted on a dynamic footprint dataset of 200 people, and the results show that the proposed method achieves better performance, with Rank-1 and mAP reaching 85.39% and 55.28%, respectively.
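
    The abstract gives no implementation details; the following is a minimal, hedged PyTorch sketch of one possible reading of the fusion step. The module names, feature dimension, GRU-based temporal branch and the projection-based "orthogonal fusion" formula are all assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): fuse frame-level CNN features into an
# appearance feature via trainable weights, extract a long-term temporal feature,
# and keep only the temporal component orthogonal to the appearance feature
# (one plausible reading of "orthogonal fusion").
import torch
import torch.nn as nn

class SpatioTemporalFusion(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.frame_weights = nn.Linear(feat_dim, 1)        # trainable per-frame weights
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)  # assumed temporal branch

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, n_frames, feat_dim) from a per-frame CNN backbone
        w = torch.softmax(self.frame_weights(frame_feats), dim=1)     # (B, T, 1)
        appearance = (w * frame_feats).sum(dim=1)                     # fused appearance feature
        _, h = self.temporal(frame_feats)
        temporal = h[-1]                                              # long-term temporal feature
        # remove the temporal component parallel to the appearance feature
        proj = (temporal * appearance).sum(-1, keepdim=True) / (
            (appearance * appearance).sum(-1, keepdim=True) + 1e-8)
        temporal_orth = temporal - proj * appearance
        return torch.cat([appearance, temporal_orth], dim=-1)         # retrieval descriptor

feats = torch.randn(2, 30, 256)              # 2 sequences, 30 frames each
print(SpatioTemporalFusion()(feats).shape)   # torch.Size([2, 512])
```

    In this sketch the retrieval descriptor is simply the concatenation of the appearance feature and the orthogonalized temporal feature; the paper's actual combination of apparent and spatio-temporal features may differ.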

  • Reviews
    WEI Zhibin, LI Xizhu, LI Hao, LIU Zhe, MENG Xiangchao, HUANG Lichuang, YANG Chaopeng, HE Guanglong
    Forensic Science and Technology. 2025, 50(2): 182-188. https://doi.org/10.16467/j.1008-3650.2024.0020

    Postmortem interval (PMI) refers to the interval between the occurrence of death and the discovery or examination of the body. Estimation of the postmortem interval is one of the important research topics in forensic pathology and has long been a focus of research. Different techniques are used to evaluate and analyze the changes in the human body after death in order to estimate the postmortem interval. Traditional estimation methods are based on postmortem phenomena such as algor mortis, rigor mortis and livor mortis; these methods rely on the subjective experience of forensic pathologists, give a relatively wide estimated range for the time of death, and are susceptible to subjective judgment. In recent years, postmortem computed tomography (PMCT) has become increasingly influential in forensic pathology. PMCT is a non-invasive, rapid and objective auxiliary means of autopsy that can significantly improve the quality and efficiency of autopsy and can reveal imaging features that may not be observed in a traditional autopsy. After death, the body undergoes a series of postmortem changes that follow a certain time course. Characteristic imaging changes in cadaveric organs and tissues over the time since death can be observed with PMCT and described quantitatively and objectively; potentially ideal imaging indicators can then be selected to estimate the postmortem interval, and the correlation between these indicators and the postmortem interval can be analyzed. This paper systematically reviews research on the correlation between postmortem imaging features of organs and tissues (such as the brain, heart, aorta, lung and liver) observed by PMCT of various parts of the cadaver (head, chest, abdomen) and the estimation of the postmortem interval, in order to provide new ideas for the study of postmortem interval estimation and to further expand the application of virtual autopsy technology in forensic pathology.

  • Research Articles
    LI Chenyang, DING Dongsheng, ZOU Guangfa, WANG Kewen, FENG Lei, GUO Xiangqian, JI Anquan
    Forensic Science and Technology. 2025, 50(2): 154-161. https://doi.org/10.16467/j.1008-3650.2024.0024

    This paper aims to validate the multiplex amplification system of 9 CpG sites reported in the literature and to explore its applicability in the Chinese population. The SNaPshot multiplex amplification system was validated in terms of accuracy, the initial amount of converted DNA template, and the detection of mixed samples. A total of 236 samples of five types of body fluids, including saliva, semen, blood, vaginal secretion and menstrual blood, were selected, and the system was used to measure the methylation values of the 9 CpG sites; a CpG site was considered detected when its methylation value was greater than 0.1. The required starting amount of sodium bisulfite-converted DNA template was analyzed over the range of 0.5 ng to 10 ng. DNA extracted from four body fluids (saliva, semen, blood and vaginal secretion) was mixed at ratios of 1∶1, 1∶5, 1∶10 and 1∶20. Finally, the detection data set of 232 samples of the five types of body fluids was used to optimize the existing body-fluid source determination method: the training set (n=162) was used to construct a random forest model, the test set (n=70) was used to predict body fluid type and evaluate the predictive performance of the model, and an external data set (n=40) was added to validate the prediction model. When the body fluid type was determined directly from the fluid-specific sites, the identification accuracies for saliva, semen, blood and vaginal secretion were 100%, 98%, 98% and 94%, respectively. Owing to the influence of the menstrual cycle, some sites were missing and the average accuracy of menstrual blood identification was only 21%. The system could effectively detect converted DNA amounts from 1 ng to 10 ng. Among the mixed samples, both body fluid sources were correctly identified in all 1∶1 mixtures; in the other mixtures (1∶5, 1∶10 and 1∶20) the major component could be detected, while detection of the minor component varied considerably. A random forest model built from the 232 samples identified the five fluid sources with 100% accuracy in both the test and the external validation sets. These results show that the multiplex amplification system is highly accurate for the identification of saliva, semen, blood and vaginal secretion, and is suitable for trace samples, 1∶1 mixed samples, and the major component of mixtures at other ratios. Compared with direct interpretation based on body-fluid-specific peaks, the random forest model identifies menstrual blood better. In summary, the DNA methylation-based multiplex amplification system for tissue identification of five forensic body fluids shows good potential for forensic application.
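
    As a rough illustration of the classification step described above, the snippet below trains a random forest on methylation values of 9 CpG sites with a 162/70 train/test split. The data are random placeholders, and the use of scikit-learn with these settings is an assumption, since the abstract does not specify the implementation.

```python
# Hedged sketch (assumed workflow, not the authors' code): classify five body-fluid
# types from 9 CpG methylation values with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
fluids = ["saliva", "semen", "blood", "vaginal secretion", "menstrual blood"]

# placeholder data: rows = samples, columns = methylation values (0-1) at 9 CpG sites
X_train = rng.random((162, 9))
y_train = rng.choice(fluids, 162)
X_test = rng.random((70, 9))
y_test = rng.choice(fluids, 70)

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```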

  • Research Articles
    WANG Guiqiang
    Forensic Science and Technology. 2025, 50(2): 111-123. https://doi.org/10.16467/j.1008-3650.2024.0032

    The likelihood ratio paradigm for facial similarity scores is the theory and method for interpreting the evidential significance of score findings from facial comparison, and it is the latest application of the Bayesian likelihood ratio paradigm in forensic science. The likelihood ratio (LR) of a facial similarity score is the ratio of the probabilities of the observed score, assigned quantitatively from probability distribution data of facial scores, under a pair of conflicting propositions that usually represent the claims of the prosecution and the defense. The propositions typically concern whether a facial image of unknown identity collected at a crime scene comes from a suspect of known identity. The facial score LR expresses the direction and strength of the relative support that the score finding gives to the prosecution and defense propositions, providing a quantitative evidence value for decision-makers in determining the disputed fact of the facial source. The decision-maker determines the facial source proposition based on the facial score LR opinion, or on the posterior probability of the proposition derived from the LR and the prior odds through Bayes' theorem, combined with other evidence, to the standard of excluding reasonable doubt. The likelihood ratio paradigm of facial similarity scores is completely different from the traditional paradigm we are accustomed to in terms of scientific logic and the formation, expression, understanding and reasoning application of opinions. It also differs from the widely used LR paradigm for DNA findings, which poses new requirements and challenges for forensic examiners and decision-makers.
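
    For readers unfamiliar with the framework, the score-based likelihood ratio and its Bayesian use described above can be written compactly as follows (generic notation, not taken from the paper): s is the observed facial similarity score, H_p and H_d the prosecution and defense propositions, and I the other case information.

```latex
\[
LR = \frac{p\left(s \mid H_p\right)}{p\left(s \mid H_d\right)}, \qquad
\underbrace{\frac{\Pr\left(H_p \mid s, I\right)}{\Pr\left(H_d \mid s, I\right)}}_{\text{posterior odds}}
= LR \times
\underbrace{\frac{\Pr\left(H_p \mid I\right)}{\Pr\left(H_d \mid I\right)}}_{\text{prior odds}}
\]
```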

  • Research Articles
    CHAI Wei, YANG Minghao, HAN Shenfei, HE Fangzhou
    Forensic Science and Technology. 2025, 50(2): 124-131. https://doi.org/10.16467/j.1008-3650.2024.0044

    Detecting abnormal behavior is crucial for maintaining public security, especially in densely populated critical areas. Traditional target detection algorithms often struggle to deliver satisfactory results under these conditions because of challenges such as dense target distribution, significant scale variation and complex backgrounds. YOLOv8 is one of the better-performing object detection models. This study introduces a novel approach to improve detection accuracy by integrating advanced mechanisms into the YOLOv8 backbone network. Firstly, the coordinate attention (CA) mechanism is incorporated into the C2f module of the backbone network, which enhances the network's focus on targets amidst complex backgrounds by emphasizing relevant features and suppressing noise. Secondly, the Swin Transformer model is integrated into the YOLOv8 backbone; the Swin Transformer facilitates greater information interaction across the feature map, effectively utilizing background information and improving object detection accuracy in complex scenarios. The datasets used in the experiments are described, the evaluation indexes P, R, AP and mAP are listed, and ablation and comparative experiments are carried out. The experiments demonstrate the feasibility and effectiveness of these improvements. The enhanced network is compared with several mainstream networks and shows a significant improvement in average precision, reaching 95.1%; compared with the baseline YOLOv8, the average precision is improved by 2.4%, which proves the effectiveness of the method. In summary, the integration of the CA mechanism and the Swin Transformer model into the YOLOv8 backbone network addresses key challenges in detecting abnormal behavior in densely populated and complex environments. These enhancements improve detection accuracy, making the approach promising for public security applications.
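
    As background for the first improvement, the block below is a hedged PyTorch sketch of a coordinate attention module in the spirit of Hou et al. (CVPR 2021); the exact integration into YOLOv8's C2f module, the channel sizes and the reduction ratio are not specified in the abstract and are assumed here for illustration.

```python
# Hedged sketch of a coordinate attention block: position is encoded separately
# along height and width, then used to re-weight the feature map.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.SiLU()
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        pool_h = x.mean(dim=3, keepdim=True)                        # (B, C, H, 1)
        pool_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)    # (B, C, W, 1)
        y = self.act(self.bn(self.conv1(torch.cat([pool_h, pool_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                        # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w)).permute(0, 1, 3, 2)    # (B, C, 1, W)
        return x * a_h * a_w                                         # re-weighted features

x = torch.randn(1, 64, 80, 80)
print(CoordinateAttention(64)(x).shape)   # torch.Size([1, 64, 80, 80])
```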

  • Technology and Application
    WU Qingling, YI Peng, CHEN Zhi, ZHANG Chi, QUAN Zongxue, XIAO Li, MA Jingjing, ZHENG Lei, WANG Yuan, WANG Le, YE Jian
    Forensic Science and Technology. 2025, 50(2): 206-210. https://doi.org/10.16467/j.1008-3650.2025.2004

    Currently, capillary electrophoresis-based Y-STR genotyping kits can detect up to 40 Y-STR loci simultaneously. However, these kits only report length-based genotypes and cannot provide STR sequence information. The STRSeqTyperY68 kit, designed for forensic male pedigree differentiation using next-generation sequencing technology, genotypes 67 Y-STR loci plus a sex-determination locus in a single-tube reaction on the MiSeq FGx sequencing platform. It simultaneously provides both length and sequence polymorphism information, simplifies testing procedures, enhances efficiency, and facilitates precise differentiation of male family lineages. The ITO method is commonly used to calculate the kinship index of a biological relationship between two individuals based on Mendel's law of genetic segregation, and it can evaluate consanguineous relationships within five degrees of kinship. The combination of next-generation sequencing technology and the ITO method can effectively narrow down the range of potential families. This paper documents a rape and murder case that remained unsolved for eight years. By jointly using the STRSeqTyperY68 kit and the ITO method, the potential connection between crime scene evidence and reference samples was evaluated, the investigative leads were gradually narrowed down, and the case was ultimately solved.

  • Technology and Application
    JIANG Linfang, LUO Zhanjun, HAN Xueli, ZHANG Mengting, DONG Shaoxiong, WANG Bin, TU Zheng, WANG Haisheng, HE Baifang
    Forensic Science and Technology. 2025, 50(2): 211-214. https://doi.org/10.16467/j.1008-3650.2025.2005

    In this study, we report a three-step mutation at the D8S1132 locus between an alleged father (AF) and a child in a paternity case. Three autosomal STR multiplex amplification kits were used for capillary electrophoresis detection. At the D8S1132 locus, the genotypes were: alleged father 17/23, mother 22, and child 20/22, which does not comply with Mendelian inheritance. Paternity was confirmed by calculating the cumulative paternity index, and a three-step mutation was identified at the D8S1132 locus. Next-generation sequencing was further used to validate the experimental results and explore the source of the mutation. The results indicated that allele 20 of the child was derived from allele 23 of the alleged father. In addition, a next-generation sequencing panel that simultaneously detected different genetic markers, including STRs, SNPs and mitochondrial DNA, increased the cumulative paternity index and further confirmed the paternal relationship. This case therefore suggests that in paternity testing, when a multi-step mutation occurs at an STR locus, capillary electrophoresis and next-generation sequencing can be combined for cross-validation to improve the credibility of the identification results.

  • Research Articles
    WANG Chenhao, HE Fangzhou, JIAN Zhongyi
    Forensic Science and Technology. 2025, 50(1): 41-47. https://doi.org/10.16467/j.1008-3650.2024.0006

    Within the realm of law enforcement, WeChat data has emerged as an indispensable investigative resource, extensively employed in crime investigation and clue tracking. This paper focuses on the information shared by WeChat users in their Moments, with particular attention to interactions between friends, and proposes a method for extracting and analyzing clues based on the WeChat Moments relationship network. Firstly, social connections between users and friends are extracted by analyzing interactions such as likes and comments in WeChat Moments. The WeChat Moments relationship network is then constructed using force-directed graph techniques, providing a visual representation of the relationships between users and their friends. Subsequently, in-depth analysis is conducted through graph clustering and centrality analysis. By identifying closely connected individuals, potential associated groups and social circles are revealed, offering key leads for subsequent investigative work. Lastly, focusing on these closely connected individuals, a thorough analysis of their chat records is performed using word cloud techniques and the TextRank algorithm; by mining keywords and topics, a more comprehensive understanding of communication content is obtained, aiding in the accurate assessment of the activities and intentions of individuals involved in the case. Through application and validation in real cases, the method demonstrates the ability to rapidly construct the WeChat Moments relationship network, identify closely connected individuals, and perform targeted analysis of their chat records. The experimental results show clear gains in the efficiency, accuracy and depth of lead acquisition, providing robust support for law enforcement investigations. The proposed method offers new perspectives and technological means for law enforcement investigations. Future work may involve further optimizing the algorithms and enhancing the capability to handle large-scale data, so as to adapt to the complex and dynamic nature of criminal environments and provide more effective support for investigative efforts.
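
    A minimal sketch of the graph-analysis portion of such a pipeline is shown below using NetworkX; the edge list, the greedy-modularity community detection and the degree centrality measure are illustrative stand-ins, since the abstract does not name the specific clustering or centrality algorithms used.

```python
# Hedged sketch (not the authors' pipeline): build an interaction graph from
# like/comment counts, find tightly connected groups, and rank central members.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

interactions = [  # (user, friend, number of likes + comments) -- placeholder data
    ("A", "B", 12), ("A", "C", 3), ("B", "C", 8),
    ("C", "D", 1), ("D", "E", 6), ("E", "F", 5), ("D", "F", 7),
]

G = nx.Graph()
G.add_weighted_edges_from(interactions)

# graph clustering: candidate associated groups / social circles
communities = greedy_modularity_communities(G, weight="weight")
print("candidate groups:", [sorted(c) for c in communities])

# centrality analysis: who sits at the core of the network
centrality = nx.degree_centrality(G)
print("most connected individuals:",
      sorted(centrality, key=centrality.get, reverse=True)[:3])
```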

  • Review
    ZHOU Bo, YAO Qin
    Forensic Science and Technology. 2025, 50(1): 96-101. https://doi.org/10.16467/j.1008-3650.2024.0005

    Fingerprints, among the most reliable and valuable types of evidence at crime scenes, have long been recognized as a powerful tool for personal identification and are used by law enforcement departments worldwide to fight crime. For many years, fingerprint analysis in practice has been developed primarily around latent fingerprint visualization. However, most fingerprints at crime scenes are ambiguous, deformed or fragmentary, which makes fingerprint analysis difficult. Recently, therefore, some researchers have paid increasing attention to fingerprint age as part of fingerprint analysis, although their research has almost entirely been carried out in the laboratory. To improve the accuracy and reliability of fingerprint age analysis and to effectively promote the application of fingerprint age in practice, this article reviews, based on these reports, the morphological characteristics of fingerprints related to fingerprint age, including two-dimensional (2D) morphological characteristics (e.g. ridge widths and the color contrast between ridges and furrows) and three-dimensional (3D) morphological characteristics (e.g. ridge heights). Furthermore, fingerprint residues, which are transferred onto object surfaces when fingertips touch them, reveal a wealth of information, especially about fingerprint age. Therefore, the changes in fingerprint residues, involving electrical effects, optical characteristics, and changes in composition such as squalene, wax esters and fatty acids, are also comprehensively summarized in this review. In particular, future research directions and prospects are discussed concerning the methods and reagents of latent fingerprint development, the equipment and technologies of fingerprint detection, the composition and degradation rates of fingerprint residues, and models of the factors influencing fingerprint age determination.

  • Review
    YU Boyu, WU Yuntao, LIU Li
    Forensic Science and Technology. 2025, 50(1): 81-88. https://doi.org/10.16467/j.1008-3650.2024.0010

    As one of the most commonly used reagents for amino acid detection, ninhydrin has a wide range of applications in forensic science and is a classic and effective reagent for developing old fingerprints on porous surfaces. However, the traditional ninhydrin development method still needs improvement for latent fingerprints on objects with complex background colors. Based on a review of relevant research results at home and abroad, the composition of fingerprint residue, the mechanism of ninhydrin fingerprint development, and improvements to the traditional ninhydrin method are briefly reviewed, and the methods for enhancing the development effect since ninhydrin was introduced into fingerprint development are reviewed in detail. Innovations over the traditional ninhydrin solution method for enhancing visualization fall mainly into three aspects: 1) optimization of the ninhydrin solution formula, such as screening the best solvent, exploring the optimal concentration, and discussing the influence of pH value on the development effect; 2) innovations in the ninhydrin application method, such as the solid-medium method, ninhydrin vacuum fumigation, and spray development, which mainly address carbonization interference and background ink interference on thermosensitive paper; 3) fingerprint enhancement treatments after ninhydrin development, mainly metal salt enhancement, rare earth–Ruhemann's purple coordination compound enhancement, and trypsin enhancement. The metal salt enhancement method and the rare earth–Ruhemann's purple coordination compound enhancement method hold great research potential for the future development of latent fingerprints.

  • Review
    WANG Yanan, ZHANG Qingxia, ZHAO Yi
    Forensic Science and Technology. 2025, 50(1): 89-95. https://doi.org/10.16467/j.1008-3650.2024.0015

    From Sanger sequencing to high-throughput sequencing, the rapid development of sequencing technology has provided ever better technical support for combating crime through forensic DNA analysis. In recent years, third-generation sequencing technology, mainly nanopore sequencing, has been widely applied in life science research, in vitro diagnostics, public health, food safety and other fields. With ultra-long reads and real-time sequencing, nanopore sequencing has great potential in forensic genetics. Many authorities and experts have recognized this potential, although the application of nanopore sequencing in forensic science is still in its infancy; the relevant research literature in forensic science is limited, and further research and exploration are needed. In this article, the authors describe the basic principle and characteristics of nanopore sequencing technology and summarize recent progress in nanopore sequencing-based STR typing, MH typing, mtDNA analysis, DNA methylation analysis and RNA sequencing. Meanwhile, non-human genetic material can provide examiners with valuable evidence and clues, and the past decade has shown the enormous potential of nanopore sequencing in non-human forensic genetics. Especially in microbial, plant and animal forensics, the application of nanopore sequencing to species identification can have a large impact and provide vital evidence and clues for public security; in addition, nanopore sequencing has been used to detect viruses at the scene. In forensic genetics, the portability and real-time output of nanopore sequencing make it the technology most likely to allow direct sequencing of biological samples at the crime scene. This development opens up new possibilities by bringing "the laboratory into the field" and is highly attractive for practical application in public security. Moreover, several problems with nanopore sequencing in forensic genetics are discussed, including complex data analysis, high error rates, high sample quality requirements, and analytical methods; there is still some distance to routine application in forensic genetics, and in-depth research is needed. Finally, we hope that this review can provide a reference for related research and applications and open up ideas for relevant personnel.

  • Research Articles
    WANG Guiqiang
    Forensic Science and Technology. 2025, 50(1): 21-32. https://doi.org/10.16467/j.1008-3650.2024.0031

    The subjective likelihood ratio paradigm of pattern features is the theory and method for interpreting the evidential significance of feature findings from pattern evidence. The subjective likelihood ratio (LR) of pattern features is the ratio of the probabilities of the observed pattern features, assigned on the basis of expert knowledge or a combination of expert knowledge and data, under two opposing propositions representing the prosecution and the defense respectively. The proposition hierarchy for evaluating the subjective LR of pattern features includes the source level and the activity level. The subjective LR of pattern features expresses the direction and strength of the relative support that the pattern feature findings give to the propositions of the prosecution and defense, providing a qualitative evidence value for decision-makers in determining disputed factual propositions. Decision-makers determine the propositional facts based on subjective LR opinions of pattern features, or on the posterior probabilities of the propositions derived from the LR through Bayes' theorem, combined with other evidence in the case, to the standard of excluding reasonable doubt. The subjective LR paradigm of pattern features differs significantly from the traditional paradigm we are accustomed to in terms of scientific logic and the formation, expression, understanding and reasoning application of opinions, and it also differs from the objective LR paradigm of DNA features. This presents new requirements and challenges for forensic examiners and decision-makers.

  • Special Topic: The Application of Artificial Intelligence in Forensic Science (I)
    YAN Shengdong, DU Weijun, PENG Silu, MENG Xiangchao, XIE Pengda, WANG Mingzhi, LI Guan, SHI Yi
    Forensic Science and Technology. 2025, 50(1): 16-20. https://doi.org/10.16467/j.1008-3650.2025.1003

    This paper explores the progress of artificial intelligence technology in the identification and reconstruction of crime scene elements. With the development of information technology, crime scene element identification and reconstruction face new challenges. The paper discusses the benefits of applying artificial intelligence, reviews its relevant applications in forensic examination, and outlines the key steps of artificial intelligence in crime scene element identification and reconstruction, in order to explore the feasibility of applying this approach. Finally, the paper looks forward to the future development of artificial intelligence in forensic examination and suggests that it may play an important role in improving the intelligence level of crime scene examination and increasing the efficiency of case investigation. It is hoped that relevant research will provide a solution for the technical transformation of crime scene examiners and lay a foundation for the intelligent and digital development of forensic technology.

  • Special Topic: The Application of Artificial Intelligence in Forensic Science (I)
    LI Wei, XIE Lanchi, LI Zhihui, HAO Can, LI Zhigang, HOU Chenggang
    Forensic Science and Technology. 2025, 50(1): 8-15. https://doi.org/10.16467/j.1008-3650.2025.1002

    With the deepening integration of deep learning and computer vision, a series of advanced technologies such as facial recognition, image (video) generation, and image classification have made rapid progress. However, deep learning models are considered "black box models" because their internal processes and predictions are difficult to explain, which poses a serious challenge to the interpretability of image evidence in forensic science. On this basis, this review outlines the interpretability issues of deep learning, with emphasis on domestic and international theoretical and methodological research on the interpretability of facial features learned by deep models, such as saliency map methods, perturbation-based methods, and score/statistics-based methods. Their applications in facial recognition and related fields, especially forensic facial image examination, are summarized. The review also discusses the open problems of deep learning-based facial feature interpretability methods and looks forward to future directions for research on the interpretability of facial features based on deep learning.
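
    As a concrete example of the simplest of the surveyed techniques, the snippet below computes a plain gradient saliency map for an arbitrary image classifier; the ResNet-18 backbone and the random input are placeholders, not the models studied in the review.

```python
# Hedged sketch of a gradient saliency map (one basic "saliency map method"):
# the importance of each pixel is the magnitude of d(class score)/d(pixel).
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()   # stand-in classifier, untrained
image = torch.randn(1, 3, 224, 224, requires_grad=True)

score = model(image)[0].max()    # score of the top predicted class
score.backward()                 # back-propagate to the input pixels

saliency = image.grad.abs().max(dim=1)[0]      # (1, 224, 224) per-pixel importance
print(saliency.shape, float(saliency.max()))
```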

  • Special Topic: The Application of Artificial Intelligence in Forensic Science (I)
    ZHAO Hemiao, YAO Lan, BAI Yifan, SUN Hui, HU Lan
    Forensic Science and Technology. 2025, 50(1): 1-7. https://doi.org/10.16467/j.1008-3650.2025.1001

    With the swift progress of artificial intelligence (AI), the field of forensic DNA examination is witnessing a technological transformation. AI has been integrated into multiple facets of forensic DNA analysis, encompassing intelligent DNA expert systems, AI-assisted optimization of examination procedures, innovative AI-assisted DNA statistics and analysis, rapid electrophoresis data analysis powered by AI, complex mixture sample analysis, and big data inference models. These advancements have significantly enhanced the precision and efficiency of forensic DNA testing. However, the integration of AI has also introduced challenges such as data privacy, model interpretability, algorithmic bias, and legal regulation. Addressing these issues necessitates close collaboration among forensic DNA experts, bioinformatics specialists, and AI professionals. Additionally, it requires the establishment of appropriate legal and regulatory frameworks to ensure that AI applications adhere to ethical standards and effectively support judicial fairness. This article provides an in-depth examination of the application of AI in forensic DNA analysis and the challenges it presents. It analyzes specific case studies to illustrate how AI contributes to the automation and intelligence of forensic DNA analysis, while also highlighting potential risks and challenges. The paper aims to offer guidance and references for the application of AI in the forensic DNA field.

  • Research Articles
    WANG Guiqiang
    Forensic Science and Technology. 2024, 49(6): 551-565. https://doi.org/10.16467/j.1008-3650.2024.0030

    The objective likelihood ratio (LR) paradigm of DNA features is the theory and method for interpreting the evidential significance of DNA analysis findings, and it is a typical representative of the forensic Bayesian likelihood ratio paradigm. The objective LR of DNA features is the ratio of the probabilities of the DNA findings, assigned quantitatively based on models and data, under two alternative propositions typically advocated by the prosecution and the defense. The hierarchy of propositions includes the sub-sub-source level, sub-source level, source level, and activity level. The LR of DNA findings expresses the direction and strength of the relative support that the DNA findings give to the propositions of the prosecution and defense, providing a quantitative evidence value for decision-makers in determining disputed propositional facts. Decision-makers determine the propositional facts, to the standard of excluding reasonable doubt, based on the LR opinions of the DNA findings or on the posterior probability of the propositions derived from the LR opinions through Bayes' theorem, combined with other evidence in the case. The objective LR paradigm of DNA features is completely different from the traditional paradigm that we are used to in terms of scientific logic and the formation, expression, understanding, and reasoning application of opinions, which poses new requirements and challenges for forensic examiners and decision-makers.

  • Research Articles
    LI Kang, CHEN Shitao, LUO Yaping
    Forensic Science and Technology. 2024, 49(6): 566-573. https://doi.org/10.16467/j.1008-3650.2024.0003

    The establishment of a scientific quantitative evaluation system for fingerprint evidence, and especially how to introduce the statistical method of likelihood ratios into the numerical representation of fingerprint identification, is a hot issue in current theoretical and practical research in forensic science. Constructing a scientific and effective likelihood ratio evaluation model for fingerprint evidence requires rich same-source and different-source fingerprint databases in order to obtain likelihood functions with a stable distribution; the quality of the same-source and different-source databases therefore directly affects the performance of the likelihood ratio model. By using a live-scan fingerprint collector and screen recording software to obtain more than 1 000 distorted fingerprint images of each fingerprint under different distortion modes, a total of 200 000 same-source fingerprints were obtained from 200 simulated fingerprints, which constitutes the same-source fingerprint database; the different-source fingerprint database consists of the ten-print database of ten million people from policing practice. On this basis, the automated fingerprint identification system was used for query and comparison, and the comparison score data were evaluated. The experimental results show that fingerprint data under different distortion modes differ significantly, while the degree of pressure and the impressing time have little effect on the comparison scores. Statistical analysis of the full sample and of subsamples reduced to different degrees shows that the same-source samples of each fingerprint can still form a stable distribution when the number of fingerprints is as low as 155. Therefore, the database we built is rich in same-source and different-source fingerprints, reasonable in structure, and provides the data basis for forming a stable distribution, which will support the subsequent establishment of the likelihood ratio evaluation model.
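
    The abstract stops short of building the likelihood ratio model itself, but a common score-based construction from such databases looks like the hedged sketch below: kernel density estimates of the same-source and different-source comparison-score distributions, with the LR of an observed score taken as the ratio of the two densities. The synthetic scores and the KDE choice are assumptions for illustration only.

```python
# Hedged sketch (illustrative only): score-based likelihood ratio from
# same-source and different-source comparison-score distributions.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
same_source_scores = rng.normal(80, 10, 5000)   # placeholder AFIS scores, same finger
diff_source_scores = rng.normal(40, 12, 5000)   # placeholder scores, different fingers

f_same = gaussian_kde(same_source_scores)
f_diff = gaussian_kde(diff_source_scores)

def likelihood_ratio(score: float) -> float:
    """LR = density of the observed score under the same-source model
    divided by its density under the different-source model."""
    return float(f_same(score) / f_diff(score))

print(likelihood_ratio(70.0))   # LR > 1 supports the same-source proposition
```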

  • Research Articles
    GUAN Xu, ZHU Huanhui, PENG Cong, SUN Limin, LIN Xianwen, WANG Songcai
    Forensic Science and Technology. 2024, 49(6): 574-579. https://doi.org/10.16467/j.1008-3650.2024.0004

    Peganum harmala is a plant widely distributed in the arid areas of northwest China; the alkaloids harmaline and harmine it contains have toxic effects in humans, such as acute central neurological symptoms, cardiovascular effects and death. To meet the qualitative and quantitative detection requirements in cases of Peganum harmala poisoning, a high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method was developed for the simultaneous detection of harmaline and harmine in human whole blood. The blood samples were deproteinized with acetonitrile, separated on an analytical column (Agilent SB-C18, 100 mm × 2.1 mm, 1.8 μm) with 0.1% formic acid and acetonitrile as the mobile phase under gradient elution at a flow rate of 0.4 mL/min, detected in electrospray ionization (ESI+) and multiple reaction monitoring (MRM) modes after dilution and membrane filtration, and quantified by the external standard calibration method. The results showed good linearity for harmaline and harmine in human whole blood over the range of 10 to 1 000 ng/mL (R2>0.999). The limits of detection (LOD) and quantitation (LOQ) of harmaline and harmine in human whole blood were 3 and 10 ng/mL, respectively. At 10, 100 and 500 ng/mL, the recoveries of harmaline and harmine ranged from 84.9% to 97.0%, the matrix effects in different human whole blood samples were within ±25%, the intra-day and inter-day precisions ranged from 3.6% to 9.8% and from 7.3% to 12.8% respectively, and the accuracy ranged from 2.4% to 9.5%, meeting the general rules for forensic toxicology in China. The method was successfully applied to a real case of abnormal death caused by improper use of Peganum harmala, in which the concentrations of harmaline and harmine in the blood of the deceased were 1.2 μg/mL and 0.2 μg/mL, respectively. In conclusion, this simple and accurate method can be applied in forensic science to the simultaneous detection of harmaline and harmine in human whole blood and can provide reliable technical support for the investigation of practical cases.

  • Research Articles
    GAO Yang, LI Xizhu, HUANG Lichuang, XIE Yang, YANG Chaopeng
    Forensic Science and Technology. 2024, 49(6): 580-585. https://doi.org/10.16467/j.1008-3650.2023.0088

    Adult bone age assessment is an important part of forensic anthropological age estimation, and includes bone age assessment of living adults, of unidentified cadavers, and of skeletonized remains. Traditional assessment methods are mainly applicable to unidentified and skeletonized cadavers, whereas bone age assessment of living adults requires high-definition pelvic volume reconstruction (VR) models, which are then assessed with reference to the traditional methods. The purpose of this study is to explore the conditions needed to establish a high-definition pelvic volume reconstruction model, thereby solving the problem of choosing multi-slice spiral CT (MSCT) scanning and reconstruction parameters for pelvic 3D imaging in adult bone age assessment. A Precision32 MSCT scanner (Kaiying Company) was used with a fixed protocol of 120 kV tube voltage, 250 mA tube current, pitch 1.05, slice thickness 1.1 mm, spacing 0.8 mm and a 512 × 512 matrix to scan fresh cadaveric pelves, and reconstruction parameters such as the filter, slice thickness, slice spacing and matrix were adjusted to compare the clarity of the pelvic volume reconstruction images and to select the necessary parameter values. The results indicate that the filter, slice thickness, slice spacing and matrix have a significant impact on the clarity of the volume reconstruction images: choosing a pelvic filter, thin slices, short slice spacing and a large matrix improves the clarity of the pelvic VR model. Therefore, on the premise of using the protocol scanning parameters such as tube voltage and tube current, it is necessary to choose a pelvic filter, thin slice thickness, short slice spacing and a large matrix to establish a high-definition pelvic volume reconstruction model.

  • Research Articles
    CHEN Xiyue
    Forensic Science and Technology. 2024, 49(6): 586-593. https://doi.org/10.16467/j.1008-3650.2024.6022

    When dealing with major emergencies, it is a great challenge for public security law enforcement departments to process, in a timely, accurate and intelligent manner, the massive and complicated multi-source heterogeneous police data transmitted by various front-end collection devices. In the context of major emergencies, the effectiveness of police work hinges on real-time data collection, accurate processing and intelligent decision-making, which necessitates a police data system capable of rapid response, accurate analysis and intelligent decisions. However, current police data processing systems still rely on manual screening and analysis, resulting in inefficiencies during major incidents. How to leverage massive police data and fully extract valuable information from it is therefore one of the difficult problems to be solved in police modernization. This paper first analyzes the current situation and the problems of police data application in China's police work during major emergencies, and then proposes countermeasures for the in-depth application of police data. Finally, digital twin and knowledge graph technologies are combined to realize the intelligent analysis and application of massive police data, thereby establishing a new policing model with capabilities for security risk perception and situation analysis.

  • Research and Discussion
    ZHANG Chi, KANG Kelai, LI Bei, SUN Boya, MIAO Lei, JIAO Ruilian, MENG Yang, ZHAO Jie, HE Lin, JI Anquan, WANG Le
    Forensic Science and Technology. 2024, 49(6): 639-644. https://doi.org/10.16467/j.1008-3650.2024.0001

    STR genotyping based on traditional capillary electrophoresis technology focuses on length polymorphisms of the markers; sequence differences among isometric STR alleles, such as SNPs and InDels, cannot be reported. Next-generation sequencing can report abundant sequence polymorphism information for STRs, including the repeat sequences and non-repeat interval sequences within STR repeat regions, as well as sequence information in the flanking regions, supporting forensic applications. This article reports an indoor homicide case that remained unsolved for 21 years. Genotypes for 38 Y-STRs were obtained by the traditional Y-STR detection method, and Y-STR sequence information was obtained using the STRSeqTyperY68 kit and next-generation sequencing technology. The STRSeqTyperY68 kit genotypes fifty-two single-copy Y-STR loci, six two-copy Y-STRs, one three-copy Y-STR, and one sex-determination locus in a single reaction tube on the MiSeq FGx sequencing platform. Full sequence-based genotypes of 67 Y-STR markers were determined for one piece of evidence collected from the crime scene and eight reference samples. Comparing the 67 Y-STR genotypes of the crime scene evidence with the eight reference samples showed that the length-based genotype at DYS448 was 20 for all nine samples, and their repeat structures were also consistent: a combination of 11 [AGAGAT] repeat units and 9 [AGAGAT] repeat units with a non-repetitive sequence of 42 nucleotides (N42) in the middle. However, the difference lay in the 32nd base of the N42 "folded sequence" of DYS448: the scene evidence and sample 1 carried C, while the other seven reference samples carried T. Based on this single base substitution in the N42 "folded sequence" of the DYS448 locus, the investigation was given direction and key technological support was provided. This article further explores sequence variation in the N42 "folded sequence" of DYS448 in different populations, as well as detailed information on the loci with "folded sequences" in the STRSeqTyperY68 kit, providing a reference for related research and case applications.

  • Technology and Applications
    LIU Zhenping, FU Yanfang, TONG Jijun, ZHAI Xiandun
    Forensic Science and Technology. 2024, 49(6): 651-655. https://doi.org/10.16467/j.1008-3650.2024.6026

    Five samples from a four-generation pedigree were tested and reviewed using three different reagent kits: ABI Yfiler Huaxia Platinum, AGCU Y SUPP Plus, and SureID PathFinder Plus, and Y-STR profiles were obtained for all samples. One of the samples exhibited genotypes different from the other four samples at four Y-STR loci (DYF387S1, DYS527, DYF404S1, and DYS459), with a maximum allelic difference of 11 steps. The other samples showed "two bands" at these loci, while this sample exhibited "one band", which does not conform to the stepwise mutation model. Upon further examination of the Y-chromosome sequence-tagged sites for this sample, a partial deletion (gr/gr) in the AZFc region of the Y chromosome was detected. Based on analysis of the family inheritance structure, we concluded that this partial deletion arose as a fragmentary deletion during paternal transmission. Partial deletions in the AZFc region can result in inconsistencies between the genotypes of multi-copy loci located in that region, such as DYF387S1, and the genotypes of familial samples, and more attention should be paid to this in practical work. If necessary, Y-chromosome STS testing can be performed on the sample to provide a scientific basis for investigating Y-STR profiles within families.

  • Research Articles
    ZHAO Yixia, WANG Zhe, HU Sheng, ZHAO Li, YE Jian, SUN Qifan, JI Anquan
    Forensic Science and Technology. 2024, 49(6): 594-601. https://doi.org/10.16467/j.1008-3650.2023.0082

    Body fluid stains are common biological materials at crime scenes, and accurate determination of their tissue sources can help with crime scene reconstruction, determination of the nature of the case, and trial. Analysis of cell-specific mRNA expression has been proposed as a promising method for the identification of body fluids. The conventional strategy of mRNA profiling requires reverse transcription, PCR amplification and electrophoresis, whereas one-step RT-PCR completes the reverse transcription and PCR of mRNA in a single reaction, reducing experimental time and simplifying operations. In this study, we applied a one-step multiplex reverse transcription PCR strategy to mRNA profiling with the F13plex system, which includes 10 tissue-specific biomarkers targeting peripheral blood (HBA, HBB), menstrual blood (MMP7, MMP10), vaginal secretion (HBD1, CYP2B7P), saliva (STATH, HTN3) and semen (PRM2, SEMG1), plus 3 housekeeping genes (ACTB, GAPDH and RPL19). We verified the system's specificity, sensitivity, and ability to detect mixed and aged samples. In terms of specificity, most of the selected genes were highly specific, but some cross-reactions were hard to avoid; to ensure the accuracy of identification, we determined that a target body fluid was present only when the three housekeeping genes and its two specific mRNA markers were detected simultaneously. In terms of sensitivity, different types of samples behaved differently: for vaginal secretion and menstrual blood samples, some specific target genes were not detected with 10 ng of RNA and the fluid could not be correctly determined, whereas for blood samples, even with 0.01 ng of RNA, the RFU values of the target genes remained above 10 000. However, actual casework usually provides only a small amount of material, and the extracted RNA is difficult to quantify. For all five body fluids, 2 μL of RNA extracted from a 1 mm2 sample was sufficient to detect the housekeeping genes and the corresponding target genes, and the correct body fluid could be determined (except for one vaginal secretion sample). The target genes of both components were detected in all 16 mixed samples and the mixtures were correctly determined, demonstrating the system's good ability to identify mixed samples. Almost all housekeeping genes could be detected in the 14 aged samples, but only 7 of these samples were correctly identified, because no specific target genes were detected in the remaining samples. Consequently, the system needs further optimization: for HBD1 and MMP7, which showed poor specificity, other vaginal secretion- and menstrual blood-specific genes will be screened and verified as replacements, and because mRNA instability leads to relatively poor results for aged samples, more stable genetic markers should be used for interpretation in practice. In general, many studies have demonstrated the usefulness of mRNA profiling for the identification of forensically relevant body fluids. According to the comprehensive assessment in this study, the one-step profiling assay can be a reliable and economical method for the simplified, accurate and simultaneous analysis of tissue-specific biomarkers for discriminating body fluid origin, and it shows good application potential in forensic body fluid identification.
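
    The calling rule stated above (all three housekeeping genes plus both fluid-specific markers) can be expressed in a few lines of Python; the marker-to-fluid mapping comes from the abstract, while the function name and input format are illustrative assumptions.

```python
# Hedged sketch of the decision rule: a body fluid is reported only when all three
# housekeeping genes and both of its specific markers are detected.
HOUSEKEEPING = {"ACTB", "GAPDH", "RPL19"}
FLUID_MARKERS = {
    "peripheral blood": {"HBA", "HBB"},
    "menstrual blood": {"MMP7", "MMP10"},
    "vaginal secretion": {"HBD1", "CYP2B7P"},
    "saliva": {"STATH", "HTN3"},
    "semen": {"PRM2", "SEMG1"},
}

def call_body_fluids(detected: set[str]) -> list[str]:
    if not HOUSEKEEPING <= detected:          # require all housekeeping genes
        return []
    return [fluid for fluid, markers in FLUID_MARKERS.items()
            if markers <= detected]           # require both specific markers

print(call_body_fluids({"ACTB", "GAPDH", "RPL19", "PRM2", "SEMG1", "HBA"}))
# ['semen']  (HBA alone is not enough to call peripheral blood)
```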

  • Research and Discussion
    LI Yongjiu, ZHANG Guangfeng, DOU Xueli, LIU Hongdi, PENG Zhu, LIU Zhifang, TU Zheng
    Forensic Science and Technology. 2024, 49(5): 514-518. https://doi.org/10.16467/j.1008-3650.2023.0073

    DNA testing of aged skulls has always been a difficult problem in forensic science, as skulls contain little DNA and the DNA is badly degraded. Here, an improved method for DNA testing of aged skulls is introduced. Taking skulls from recent cases as examples, optimization was carried out in the selection of sampling sites, the concentration and recovery of the demineralization solution, and large-volume DNA extraction and purification, and autosomal and Y-chromosome STR profiles were successfully obtained. The results showed that the petrous part of the temporal bone is the preferred sampling site for testing aged skulls, with a higher success rate than other parts of the skull. In addition, Amicon Ultra-15 10K centrifugal ultrafiltration tubes can remove a large amount of small molecules, such as water and ions, from the decalcification solution while retaining and recovering the large DNA molecules, which effectively reduces DNA loss. Furthermore, because the DNA in aged skulls is of poor quality, increasing the amount of bone powder and scaling up the extraction and purification system can also significantly increase the amount of DNA recovered. This method improves the quantity and quality of DNA recovered from aged skulls and can provide a reference for subsequent DNA testing of similar aged bones and teeth.

  • Research Articles
    LIU Zhenping, TONG Jijun, ZHAI Xiandun
    Forensic Science and Technology. 2024, 49(5): 480-484. https://doi.org/10.16467/j.1008-3650.2023.0076

    In this paper, the characteristics of allele dropout at Y-STR loci and its correlation with AZF (azoospermia factor) deletion are discussed, providing a reference for forensic practice. Y-STR kits (Yfiler Platinum, SureID PathFinder Plus) were used to analyze blood samples from 23 461 male family members, and a total of 14 samples with dropout at 4 or more Y-STR loci were found. Meanwhile, sequence-tagged sites (STS) were detected with a Y-chromosome microdeletion detection kit, and deletions in the AZF region of the Y chromosome were evaluated according to the deleted STS. The results showed that the proportion of samples with multiple Y-STR dropouts was 0.059 7% (14/23 461), including 1 case on the short arm and 13 cases on the long arm, which came from different families and showed different deletion types. STS deletions were detected in the AZF region of the 13 samples with multiple dropouts on the long arm, and no abnormality was detected in the 1 sample with multiple dropouts on the short arm. This study suggests that there is a correspondence between multiple STR allele dropouts on the long arm of the Y chromosome and microdeletions in the AZF region, and that a biological basis for sterility exists in these individuals.

  • Research Articles
    WANG Guiqiang
    Forensic Science and Technology. 2024, 49(5): 441-455. https://doi.org/10.16467/j.1008-3650.2024.0029

    The interpretation paradigm of forensic findings is undergoing a shift from the traditional paradigm of categorical conclusions to the likelihood ratio paradigm. The forensic likelihood ratio paradigm, of which DNA evidence is a typical example, has improved the validity of forensic opinions and transformed the reasoning and proof mode of forensic opinions. The forensic likelihood ratio paradigm is based on the Bayesian likelihood ratio framework: the likelihood ratio (LR) of the forensic findings is assigned under a pair of opposing source-level or activity-level propositions advocated by the prosecution and defense, and the LR is used as the evidential value of the forensic findings in relatively supporting the propositions of the two parties. It provides LR opinions on forensic findings to help decision-makers infer and determine disputed propositional facts at the source or activity level. The forensic likelihood ratio paradigm is completely different from traditional paradigms in terms of scientific basis and the formation, expression, understanding and reasoning application of opinions, which poses new requirements and great challenges for forensic examiners and decision-makers in criminal proceedings.

  • Research Articles
    ZHANG Juntao, YANG Xingyi, YU Zhengliang, ZHAO Peng, LIU Dayu, HAN Xiaolong, SUN Hongyu, LIU Chao
    Forensic Science and Technology. 2024, 49(5): 456-463. https://doi.org/10.16467/j.1008-3650.2023.0084

    The AGCU X Plus STR system is a newly developed multiplex PCR kit that detects 32 X-chromosomal STR loci simultaneously: DXS6807, DXS9895, linkage group 1 (DXS10148, DXS10135, DXS8378), DXS9902, DXS6795, DXS6810, DXS10159, DXS10162, DXS10164, DXS7132, linkage group 2 (DXS10079, DXS10074, DXS10075), DXS981, DXS6800, DXS6803, DXS6809, DXS6789, DXS7424, DXS101, DXS7133, GATA172D05, GATA165B12, linkage group 3 (DXS10103, HPRTB, DXS10101), GATA31E08, and linkage group 4 (DXS8377, DXS10134, DXS7423). A major advantage of this kit is that, in addition to detecting more X-STR loci, it takes linkage between loci into account. To evaluate the forensic applicability of the 32 X-STR fluorescence amplification system, PCR settings, sensitivity, species specificity, stability, DNA mixtures, concordance, stutter, sizing precision, and population genetics were evaluated according to the Scientific Working Group on DNA Analysis Methods (SWGDAM) developmental validation guidelines. The study showed that accurate genotyping results were obtained at each locus when the DNA template was at least 62.5 pg, and complete profiles were obtained for the 1∶1 and 1∶3 mixtures. A total of 209 unrelated individuals from the southern Chinese Han population, consisting of 84 females and 125 males, were selected for the population study, and 285 alleles were detected at the 32 X-STR loci. The polymorphism information content (PIC) ranged from 0.272 1 at DXS6800 to 0.910 5 at DXS10135, with an average of 0.679 8. DXS10135 (PIC = 0.910 5) was the most polymorphic locus, with discrimination power (DP) of 0.916 4 in males and 0.987 1 in females. The cumulative PDF, PDM, MECtrio and MECduo values were all greater than 0.999 999 999. There were 78 different DXS10103-HPRTB-DXS10101 haplotypes among the 125 males, with a haplotype diversity of 0.981 0. There was no significant difference in the cumulative PDF, PDM, MECtrio and MECduo values whether or not linkage was considered. In summary, the new X-STR multiplex typing system is effective and reliable and can be useful in human genetic analysis and kinship testing as a potent complement to autosomal STR typing.
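
    For reference, the polymorphism information content values quoted above are conventionally computed from the allele frequencies p_i of a locus with n alleles as in Botstein et al.; the abstract does not quote the formula, so the standard definition is given here only as an assumption about the calculation used.

```latex
\[
\mathrm{PIC} = 1 - \sum_{i=1}^{n} p_i^{2}
             - \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} 2\,p_i^{2} p_j^{2}
\]
```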

  • Research Articles
    DONG Ying, CAO Zhenyu, HE Li, MIAO Liwen, LIU Jian, ZHANG Hongmin
    Forensic Science and Technology. 2024, 49(5): 464-471. https://doi.org/10.16467/j.1008-3650.2023.0085

    A method for simultaneously analyzing 10 bufotoxins (arenobufagin, gammabufotalin, resibufagin, desacetylcinobufagin, resibufogenin, cinobufotalin, bufotalin, cinobufagin, bufalin, cinobufaginol) in biological tissue samples was established using ultra-high performance liquid chromatography-quadrupole/electrostatic field orbitrap high resolution mass spectrometry (UPLC-Q/Exactive MS), with liver as the model sample. Acetonitrile (0.8 mL) was added to 0.2 g of liver to precipitate proteins, and the extract was then purified on a hybrid solid-phase column. An ACQUITY UPLC HSS T3 column (2.1 mm × 100 mm, 1.8 µm) was used, with mobile phase A consisting of 0.1% formic acid in water and mobile phase B of acetonitrile under a gradient elution program. Mass spectrometric analysis was performed with an electrospray ionization source in positive ion mode. The detection limits of the 10 bufotoxins in liver were all below 10 ng/mL, the lowest reaching 0.1 ng/mL. The 10 bufotoxins showed good linearity within the range of 20 ng/mL to 400 ng/mL (R2>0.996). The extraction recoveries of the 10 bufotoxins from liver ranged from 62.9% to 93.2%, and the matrix effects ranged from 90.9% to 109.0%. The intra-day precision was within 9.2% and the inter-day precision within 14.6%. The 10 bufotoxins were stable in the matrix under repeated freeze-thaw, room temperature and post-extraction conditions, with RSDs of concentration all within 15%. After gastric administration, the tissue distribution in male rats was liver>lung>heart>kidney>reproductive organs>muscle, and in female rats it was lung>reproductive organs>liver>heart>kidney>muscle. This study establishes a detection method for bufotoxins in biological tissues and tentatively reveals the distribution pattern of bufotoxins in animal bodies, providing a reference for the examination of bufotoxins and the selection of relevant specimens.

  • Research Articles
    LUO Shuai, LIU Anjie, ZHANG Xingtao, ZHAN Mengjun, LIU Meng, FAN Fei, ZHOU Yuchi, LIU Changyuan, DENG Zhenhua
    Forensic Science and Technology. 2024, 49(5): 472-479. https://doi.org/10.16467/j.1008-3650.2024.0035

    Calculation of the degree of lung compression with Mimics software remains the “gold standard”. In forensic practice, however, the complexity of the Mimics software means that many examiners do not use this method; instead, they may estimate the degree of lung compression by visual observation or report it from a few selected CT slices, both of which lead to inaccurate results. The aim of this study is to develop a model for automatic calculation of the degree of lung compression based on deep learning semantic segmentation, and to explore the feasibility of deep learning for lung compression measurement by comparing the automatically calculated results with those of Mimics software. Computed tomography (CT) data of 42 cases with a pneumothorax diagnosis were collected in DICOM format; each case contained about 350 images with a slice thickness of 1 mm. Among them, 32 cases were used for training and 10 for validation, and the air-containing regions of 1943 images were manually annotated. An additional five chest CT cases were selected for external testing. The degree of lung compression was calculated by both the deep learning model and Mimics software, and the correlation between the results of the two methods and the calculation errors were analyzed. In the validation set, the average error between the deep learning model and the manual method was 2.4%; the model processed an average of 356 images per case in an average time of 60.04 s, while the average error in the test set was 4.4%. These results lead to the following conclusion: the deep learning model constructed in this study has the potential to be applied in automated measurement of the degree of lung compression due to pneumothorax, and can provide a reference for its calculation in forensic practice.
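
    For readers unfamiliar with the volumetric step, the sketch below illustrates one common way a degree of lung compression can be derived from per-slice segmentation masks (air volume relative to air plus collapsed lung); the formula, voxel spacing and masks are illustrative assumptions and may differ from the paper's exact definition.

```python
# Minimal sketch (assumptions flagged in comments): degree of lung compression
# from binary masks of the pleural air region and the collapsed lung, computed
# here as air / (air + lung). This ratio is one common definition only.
import numpy as np

def volume_ml(masks, spacing_mm=(1.0, 0.7, 0.7)):
    """Sum voxels over all slices and convert to millilitres.

    masks: array of shape (n_slices, H, W) with values 0/1.
    spacing_mm: (slice thickness, pixel height, pixel width) from the DICOM header.
    """
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return masks.sum() * voxel_mm3 / 1000.0

def compression_degree(air_masks, lung_masks):
    v_air = volume_ml(air_masks)
    v_lung = volume_ml(lung_masks)
    return 100.0 * v_air / (v_air + v_lung)

# Toy example: 10 slices of 128x128 masks (random here, for shape only; in
# practice these would be predictions from the segmentation model).
rng = np.random.default_rng(0)
air = (rng.random((10, 128, 128)) > 0.98).astype(np.uint8)
lung = (rng.random((10, 128, 128)) > 0.90).astype(np.uint8)
print(f"compression ≈ {compression_degree(air, lung):.1f}%")
```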

  • Reviews
    YI Shuang, YAO Qin
    Forensic Science and Technology. 2024, 49(5): 485-494. https://doi.org/10.16467/j.1008-3650.2023.0089

    Latent fingerprints are among the most important physical evidence in forensic identification, although most of them are invisible and require physical or chemical treatment to become visible. To date, a variety of methods have been developed to enhance the visualization of latent fingerprints. However, as criminal methods evolve, the traditional development methods show their limitations. Physical development methods are not sensitive enough for latent fingerprints on complex surfaces, and during the development operation powders or toxic substances readily become airborne and endanger the operator's health. Chemical development methods also suffer from cumbersome operation, high cost, toxicity and interference with subsequent DNA analysis. As specialized materials, nanomaterials have been widely used in medical equipment, electronic devices and coatings. In the field of latent fingerprint development, nanomaterials have also attracted extensive attention from researchers because of their good adsorption, low airborne dispersal, wide applicability and limited impact on DNA analysis. This article comprehensively reviews research progress at home and abroad on various nanomaterials for latent fingerprint development. The nanomaterials covered include metal nanomaterials, non-metallic nanomaterials, metal oxide nanomaterials and other nanomaterials, and their development mechanisms, synthesis methods and applicability are discussed. In addition, future trends of nanomaterials in latent fingerprint development are suggested, in order to further promote their practical application.

  • Research and Discussion
    BI Fulun, WANG Wei, QI Yueying, XIE Jiayi, NA Man, WU Jiaquan, LIANG Ying, ZHANG Jianqiang
    Forensic Science and Technology. 2024, 49(5): 507-513. https://doi.org/10.16467/j.1008-3650.2023.0091

    Bloodstains are among the most important forensic evidence in criminal cases, and identifying them and extracting the potential evidence they carry is of great significance for solving such cases. In this paper, a hand-held near-infrared (NIR) spectrometer was used to collect spectral data from bloodstain samples of different species, including human, chicken and pig blood, on cotton fabrics of different colors. After data collection, standard normal variate (SNV) pre-processing was applied to the spectra to eliminate common offset and scaling effects. Training models were then established with the extreme learning machine (ELM) algorithm to identify the species of a bloodstain, and the test samples were predicted with the built species identification model. The traditional support vector machine (SVM) and genetic algorithm-back propagation (GA-BP) classification algorithms were also used to build identification models, and their predictions were compared with those of the ELM algorithm. The experimental results showed that the prediction accuracy of the ELM algorithm was 98.48%, higher than that of the GA-BP algorithm (84.62%) and the SVM algorithm (73.84%); the precision, sensitivity and specificity of the ELM predictions were likewise much higher than those of the SVM and GA-BP algorithms. These results showed that the identification model built with the ELM algorithm was the most accurate and performed best overall. The rapid detection method based on a handheld NIR spectrometer and the ELM algorithm can therefore identify the species of bloodstains efficiently, non-destructively, quickly and accurately, and provides a new technical reference for bloodstain detection and identification in criminal cases.
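
    To make the processing pipeline concrete, the following minimal sketch applies SNV preprocessing and trains an extreme learning machine on hypothetical spectra; the hidden-layer size, activation function and data are illustrative choices, not the settings used in the study.

```python
# Minimal sketch (hypothetical data): SNV preprocessing followed by an
# extreme learning machine (ELM) classifier.
import numpy as np

def snv(spectra):
    # Row-wise SNV: remove each spectrum's offset and scale.
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

class ELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # random, untrained hidden layer
        T = np.eye(n_classes)[y]                  # one-hot targets
        self.beta = np.linalg.pinv(H) @ T         # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

# Hypothetical NIR spectra: 90 samples x 256 wavelengths, 3 classes
# (human / chicken / pig bloodstains).
rng = np.random.default_rng(1)
X = rng.random((90, 256))
y = np.repeat([0, 1, 2], 30)
model = ELM().fit(snv(X), y)
print("training accuracy:", (model.predict(snv(X)) == y).mean())
```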

  • Research Articles
    ZENG Haoran, LIU Kangkang, LUO Yaping
    Forensic Science and Technology. 2024, 49(4): 340-349. https://doi.org/10.16467/j.1008-3650.2023.0063

    The fingerprint is the main morphological region of the finger and plays an important role in daily life. In dermatology, fingerprint creases are obvious features distinct from friction ridge configurations and flexion folds; many scholars also call them fingerprint white lines, because these white stripes interrupt the normal friction ridges in various directions in fingerprint impressions. Because crease features have an obvious morphological structure and widely exist in the population, while their formation and appearance are affected by a variety of factors, they have gradually attracted the attention of forensic science researchers at home and abroad. Exploring the stability and variation of fingerprint crease features is the basic premise for applying them to fingerprint identification, yet there are few basic theoretical studies on them at present. In this paper, the influence of skin metabolism on finger crease features was studied, aiming to explore whether these features remain stable over the period of epidermal replacement, and to analyze their identification value and risk. A high-magnification digital microscope was used to observe and mark 462 finger crease features in 20 samples over one month; the length and relative position were measured and the coefficient of variation was calculated to analyze the stability of the features. The results showed that during the epidermal replacement period about 82% of creases were highly stable, 12% changed slightly and 5% changed obviously, and crease features were extremely unstable during desquamation of the skin. Most finger crease features formed linear trajectories and their relative positions were stable. The characteristics of finger creases are therefore both variable and relatively stable. Their application in fingerprint identification has some limitations, but their value cannot be ignored: when the crease features are of good quality, their identification value can be fully exploited while identification risks are avoided, so that the identification of difficult fingerprints can rest on more types of features and the resulting opinions become more scientific and reliable. At the same time, it should be noted that blind application of crease features may increase the risk of erroneous fingerprint identification. The results of this paper effectively supplement the basic theory of finger crease features and help enrich fingerprint examiners' understanding of these features and of their identification value and risk.
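
    As a small illustration of the stability metric mentioned above, the sketch below computes the coefficient of variation of a crease feature's length across repeated observations; the measurements and the banding thresholds are hypothetical, not the paper's criteria.

```python
# Minimal sketch (illustrative thresholds and measurements): stability of a
# crease feature via the coefficient of variation (CV = std / mean) of its length.
import statistics

def coefficient_of_variation(measurements):
    return statistics.stdev(measurements) / statistics.mean(measurements)

# Hypothetical lengths (mm) of one crease measured weekly over a month.
lengths = [2.31, 2.28, 2.35, 2.30]
cv = coefficient_of_variation(lengths)
# Example banding only; the study's own stability categories may differ.
label = "stable" if cv < 0.05 else ("small change" if cv < 0.15 else "obvious change")
print(f"CV = {cv:.3f} -> {label}")
```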

  • Research Articles
    HAN Wenqiang, LUO Yaping
    Forensic Science and Technology. 2024, 49(4): 367-374. https://doi.org/10.16467/j.1008-3650.2023.0065

    In recent years, the rapid growth in the capacity of AFIS (Automatic Fingerprint Identification System) databases has made fingerprint identification increasingly difficult, particularly because of the interference of close non-matches (CNMs) with homologous fingerprints in search results. Before AI fingerprint recognition algorithms are adopted, CNMs with higher scores and higher rankings may appear in the candidate list. To explore how AFIS database size affects the occurrence of homologous fingerprints and CNMs under a traditional comparison algorithm, this experiment established databases at the 6-million-person, 10-million-person and 100-million-person levels by restricting the fingerprint card imprinting time during querying. Queries were then run, and the search results were examined for each part of the delta area of the loop (root part, center part and periphery part), each part being annotated with the 10 minutiae closest to the apex of the bottom-type line. The results show that as the capacity of the fingerprint database grows, the occurrence rate of homologous fingerprints decreases and their ranking drops, and the larger the growth in capacity, the more obvious the decrease. As the capacity grows, the number of CNMs increases and the number of their corresponding points increases as well, and the larger the growth in capacity, the more obvious the increase. In this experiment, three high-ranking CNMs with 10 corresponding points were found in the 10-million-person and 100-million-person databases. When CNMs rank above homologous fingerprints, they may interfere with fingerprint examiners. In addition, the number of CNMs in the three parts of the delta area of the loop followed root part > center part > periphery part across the different databases, which is related to the minutiae density in the three parts: the higher the density, the smaller the distance between minutiae and the smaller the area over which a given number of minutiae are distributed, so the more likely the same distribution pattern is to recur, the lower the specificity of the minutiae configuration, and the more easily feature similarity arises. This study aims to raise fingerprint examiners' risk awareness under big-data conditions. Beyond exercising caution, the field may also need substantial work on upgrading fingerprint matching algorithms and establishing new fingerprint identification paradigms.

  • Reviews
    SONG Binghui, HE Ting, FU Junjiang
    Forensic Science and Technology. 2024, 49(4): 401-409. https://doi.org/10.16467/j.1008-3650.2023.0067

    Forensic medicine is an applied discipline that uses medical and other related knowledge to provide a scientific basis and evidence for criminal investigations, civil disputes, medical disputes and other law-related matters, so as to achieve justice and maintain social harmony and stability. Time-related estimations (TRE), such as estimation of the time since death, wound age and body fluid stain age, are important components of forensic identification and can provide clues for case investigation, delineation of the investigation scope, determination of the time of the crime and screening of alibis. Epigenetics has been studied extensively in many fields and is highly valuable in forensic medicine because of its special mode of inheritance, which is not based on changes in DNA sequence. Non-coding RNA (ncRNA) is currently regarded as an emerging epigenetic marker in a variety of fields, and the functions and mechanisms of ncRNAs in the physiological and pathological processes of different systems, organs, tissues and cells have been revealed. Generally, ncRNAs may be divided into two groups based on their biological roles: housekeeping ncRNAs, which are critical for sustaining fundamental cellular activities, and regulatory ncRNAs, which act as regulators in cells. With the development of molecular biology, molecular genetics and bioinformatics, non-coding RNAs such as microRNA (miRNA) and circular RNA (circRNA) have shown great potential for forensic identification and provide new approaches to the forensic problems of TRE. In this review, we summarized the common methods for detecting ncRNAs in forensic medicine and described the research progress and application of ncRNAs such as miRNA and circRNA in estimating the time since death, wound age and body fluid stain age, which shows the close links between these markers and forensic medicine. Additionally, we discussed the research value and application prospects of non-coding RNA in TRE.

  • Research Articles
    LI Jian, ZHAO Huanhuan, MA Bin, WANG Chunpeng, WU Xiaoming, ZHANG Xiaobo
    Forensic Science and Technology. 2024, 49(4): 350-358. https://doi.org/10.16467/j.1008-3650.2023.0074

    In the age of ubiquitous smartphones and the proliferation of social media platforms, the importance of image privacy has never been more prominent. The misuse of images by malicious actors has become a pressing concern, leading to illegal activities such as fraud, stalking and extortion, with harmful consequences for society. The primary objective of this study is to devise an efficient algorithm to tackle the problems stemming from compromised image privacy. We aim to combat these issues by exploiting the information inside images themselves and mitigating the potential harm caused by image privacy breaches. The research focuses on curtailing the identifiability of an image's source camera, thereby reducing the potential for malicious image exploitation. Given the challenges associated with existing photo-response non-uniformity (PRNU) analysis techniques, including degraded image quality and low time efficiency, this paper introduces a novel PRNU anonymization algorithm. The algorithm first segments the image and then employs a combination of the discrete cosine transform (DCT) and Wiener filters to filter the high-frequency components of the image. Experimental results demonstrate that the iterative Wiener filter significantly enhances the efficiency of the anonymization algorithm. The proposed algorithm has been extensively tested on the MICHE-I dataset and the UBIPR iris dataset. The results show that the algorithm effectively preserves image privacy while its perceptible impact on image quality remains minimal: the average improvement in visual quality is approximately 3 dB, and the traceability of the images is successfully suppressed. Furthermore, the biometric features of the original images remain intact, further enhancing the efficacy of anonymization. This research is of great significance to digital forensics and image analysis; the algorithm not only helps safeguard the privacy and security of the general public but also holds substantial promise for forensic investigations and criminal case resolution.
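
    The sketch below illustrates the general idea of the block-wise DCT plus Wiener filtering step described above; it is a simplified, single-pass illustration under stated assumptions (block size, size of the retained low-frequency corner), not the authors' iterative algorithm.

```python
# Minimal sketch (not the paper's exact algorithm): suppress the
# high-frequency content that carries the PRNU fingerprint by processing the
# image block-wise with a DCT and Wiener-filtering the coefficients while
# keeping the low-frequency corner of each block untouched.
import numpy as np
from scipy.fft import dctn, idctn
from scipy.signal import wiener

def anonymize_prnu(image, block=8, keep=2):
    """image: 2D float array (grayscale); keep: side length of the untouched
    low-frequency corner in each DCT block. Both values are illustrative."""
    out = image.astype(float).copy()
    h, w = image.shape
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            coeffs = dctn(out[i:i + block, j:j + block], norm="ortho")
            filtered = wiener(coeffs, mysize=3)            # smooth all DCT coefficients
            filtered[:keep, :keep] = coeffs[:keep, :keep]  # restore low frequencies
            out[i:i + block, j:j + block] = idctn(filtered, norm="ortho")
    return out

# Toy usage on a random array standing in for a decoded photo.
img = np.random.default_rng(0).random((64, 64))
print("mean absolute change:", np.abs(anonymize_prnu(img) - img).mean())
```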

  • Research Articles
    SHEN Ruidi, LI Peng, HU Wen, QIAN Zhenhua
    Forensic Science and Technology. 2024, 49(4): 359-366. https://doi.org/10.16467/j.1008-3650.2023.0075

    New synthetic opioids (NSOs) are on the growing list of illicit drugs and pose a serious threat to human health; examples include AP-237, piperidine-type opioids, bromomorphine and a variety of 2-benzylbenzimidazole opioids. The 2-benzylbenzimidazole opioids, also known as “Nitazenes”, many of which are regulated by the United Nations Commission on Narcotic Drugs, have become internationally prevalent in recent years. Metonitazene is an emerging potent synthetic opioid that first appeared in the recreational drug supply in mid-2020 and surged soon afterwards, raising increasing public health concern. In this study, we present a method for identifying metonitazene, a novel synthetic opioid seized for the first time in China, by gas chromatography-mass spectrometry (GC-MS) and ultra-high performance liquid chromatography-quadrupole/electrostatic field Orbitrap mass spectrometry (UPLC-Q-Exactive Orbitrap-MS). The unknown samples were extracted with methanol, and the supernatants were analyzed by GC-MS and UPLC-Q-Exactive Orbitrap-MS respectively; isotonitazene was analyzed for comparison. In GC-MS detection, the main characteristic fragment ions of the unknown component at a retention time of 15.167 min were m/z 86.1 (base peak), 58.05, 121.1, 205.05, 236.05 and 248.1. UPLC-Q-Exactive Orbitrap-MS detection showed that the protonated molecular ion of the unknown component at a retention time of 7.35 min was m/z 383.207 7, and the main product ions in the MS/MS spectrum were m/z 310.118 2, 264.125 2, 121.064 9, 100.112 1 and 72.080 8. In the absence of a reference standard, the target was identified as metonitazene by searching the SWGDRUG and Cayman databases, comparing the information reported in the literature and analyzing the high-resolution mass spectrometry data. The method is simple and rapid, has good application prospects and can be used in real casework.
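
    As a supplementary check of the kind used when no reference standard is available, the sketch below compares the observed m/z 383.207 7 with the theoretical protonated molecule of metonitazene, assuming its commonly reported molecular formula C21H26N4O3; the code and monoisotopic masses are illustrative, not taken from the paper.

```python
# Minimal sketch: mass-accuracy (ppm) check of a high-resolution m/z against
# the theoretical [M+H]+, assuming metonitazene is C21H26N4O3.
MONOISOTOPIC = {"C": 12.0, "H": 1.00782503, "N": 14.0030740, "O": 15.9949146}
PROTON = 1.00727646

def protonated_mz(formula):
    """formula: dict of element counts, e.g. {'C': 21, 'H': 26, 'N': 4, 'O': 3}."""
    return sum(MONOISOTOPIC[el] * n for el, n in formula.items()) + PROTON

measured = 383.2077                       # observed [M+H]+ reported in the abstract
theoretical = protonated_mz({"C": 21, "H": 26, "N": 4, "O": 3})
ppm = (measured - theoretical) / theoretical * 1e6
print(f"theoretical [M+H]+ = {theoretical:.4f}, error = {ppm:.2f} ppm")
```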

  • Technology and Application
    MENG Yunle, YAN Shengdong, WEI Zhibin, YANG Chaopeng, LI Yang, LI Hao, HE Guanglong
    Forensic Science and Technology. 2024, 49(4): 417-421. https://doi.org/10.16467/j.1008-3650.2023.0077

    A multi-rigid body model was used in this study to simulate and reconstruct two disputed falling scenarios in a practical case. The MADYMO software was used to simulate and calculate the landing position and point of impact of the human body when falling from a window, and to analyze the physical parameters of the main body regions. The results showed that when the body fell from the window in a straddle position, the fall trajectory tilted to the side and the side of the body landed first; when the body fell in a clinging position, the trajectory was straight and vertical and the lower limbs landed first. Combined with the fact that the most severe fractures in the case occurred in the lower limbs and pelvic region, indicating that the lower limbs landed first, the findings were consistent with the multi-rigid body simulation of a fall in the clinging position. The analysis therefore concludes that the fall injuries in this case are consistent with the body falling from the window in a clinging position. Based on a practical case, this study used a multi-rigid body model to analyze and reconstruct the process of a human fall, providing a new visualization method for the analysis of fall injuries and for auxiliary judgment of the nature of a fall.

  • Technology and Application
    LOU Yandi, ZHENG Qinggeng, JI Chaohao, SONG Ruikun
    Forensic Science and Technology. 2024, 49(4): 422-426. https://doi.org/10.16467/j.1008-3650.2023.0083

    In the investigation of telecommunication network fraud cases, especially click farming, investment and financial management fraud, and naked chat cases, forensic analysis of the apps and URLs involved is the focus of network-side investigation. Because the apps involved need to provide functions such as chatting, picture uploading and voice calls, apps built on instant messaging (IM) frameworks have become mainstream, and NetEase Yunxin IM is currently the most common third-party IM framework in such fraud cases. However, as criminals increasingly conceal their methods, for example by encrypting the apps or applying end-to-end encryption to chat content, direct analysis either cannot obtain the key values of the IM interface or retrieves only encrypted gibberish, so the chat content cannot be viewed. Based on such cases, this paper introduces the principle of the IM framework and the encryption techniques and decryption methods for the apps and their chat content. Through in-depth reverse engineering and analysis of the encryption algorithms of these apps, the efficiency of clue mining, investigation and evidence collection in individual fraud cases can be greatly improved, providing strong support for the detection of related cases.

  • Research Articles
    WANG Guiqiang
    Forensic Science and Technology. 2024, 49(4): 331-339. https://doi.org/10.16467/j.1008-3650.2024.0028

    The forensic paradigm comprises the scientific theories and methods used to interpret the findings of a forensic examination and to form an expert opinion. A paradigm shift is under way from the traditional paradigm of categorical source conclusions to the paradigm of evaluative opinion. The traditional forensic paradigm rests on the assumption of feature uniqueness; it has a history of over 100 years of development and application and has been applied to almost all types of physical evidence except DNA evidence. After detecting and comparing trace evidence from a crime scene with a known-source sample, the examiner determines whether the features of the trace match those of the sample and uses threshold decision-making to give an opinion on whether the trace and the sample came from the same source or from different sources. In the traditional paradigm, the process by which the examiner forms a categorical source opinion from the comparison results is deductive: the major premise is the assumption of the uniqueness of the trace features, the minor premise is the result of the feature comparison (match or non-match), and the conclusion is that the trace and the sample have the same (or a different) source. As long as the major and minor premises are true, the categorical source opinion of the traditional paradigm is correct. However, with the development and maturation of evaluative methods for forensic DNA results, some scholars have questioned the lack of empirical proof for the feature-uniqueness hypothesis in the traditional forensic paradigm, arguing that deductive reasoning without a sound major premise of trace feature uniqueness is not valid and that, therefore, the categorical source opinion of the traditional paradigm lacks a solid scientific foundation.
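
    For context, the evaluative paradigm referred to here is usually formalized with a likelihood ratio; a standard textbook formulation (general background, not a formula taken from this article) is:

```latex
% Likelihood-ratio formulation of evaluative reporting: the strength of the
% findings E is the ratio of their probabilities under two competing
% propositions H_p and H_d, and it updates the prior odds to posterior odds.
\mathrm{LR} = \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)},
\qquad
\frac{\Pr(H_p \mid E)}{\Pr(H_d \mid E)}
= \mathrm{LR} \times \frac{\Pr(H_p)}{\Pr(H_d)}
```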