Browsing by author "Cakmak, Muhammet"
Showing results 1 to 3 of 3
Item
A Comprehensive Survey on Automatic Detection of Fake News Using Natural Language Processing: Challenges and Limitations (Institute of Electrical and Electronics Engineers Inc., 2024)
Saleh, Alhadi Omran; Karaoglan, Kursat Mustafa; Cakmak, Muhammet
The study examines how Natural Language Processing (NLP) can be used to automatically detect fake news and how it can be applied to fact-checking in the disciplines of linguistics, computer science, journalism, and information sciences. By evaluating the efficacy, dependability, and breadth of various NLP algorithms, this study demonstrates the possibilities and limitations of autonomous fake news identification. The study reveals the importance of balancing the technical form of information with its social-cognitive dimensions: an overemphasis on the technical components can lead to fragmented comprehension. In the age of digital self-publishing, the role of authoritativeness in determining the credibility of information is also raised as a significant concern. The report emphasizes the need for an integrative approach to prevent the spread of fake news, recommending interdisciplinary collaboration and the ongoing refinement of NLP research methods for future studies. © 2024 IEEE.

Item
Profile Photograph Classification Performance of Deep Learning Algorithms Trained Using Cephalometric Measurements: A Preliminary Study (MDPI, 2024)
Kocakaya, Duygu Nur Cesur; Ozel, Mehmet Birol; Kartbak, Sultan Busra Ay; Cakmak, Muhammet; Sinanoglu, Enver Alper
Extraoral profile photographs are crucial for orthodontic diagnosis, documentation, and treatment planning. The purpose of this study was to evaluate classifications made on extraoral patient photographs by deep learning algorithms trained on patient pictures grouped according to cephalometric measurements. Cephalometric radiographs and profile photographs of 990 patients from the archives of Kocaeli University Faculty of Dentistry Department of Orthodontics were used for the study. FH-NA, FH-NPog, FMA, and N-A-Pog measurements on patient cephalometric radiographs were carried out using WebCeph. Three groups were formed for every parameter according to the cephalometric values. Deep learning algorithms were trained using extraoral photographs of the patients, grouped according to the respective cephalometric measurements. Fourteen deep learning models were trained and tested for prediction accuracy in classifying patient images. Accuracy rates of up to 96.67% for FH-NA groups, 97.33% for FH-NPog groups, 97.67% for FMA groups, and 97.00% for N-A-Pog groups were obtained. This is a pioneering study in which clinical photographs were classified using artificial intelligence architectures trained on actual cephalometric values, thus eliminating or reducing the need for cephalometric X-rays in future applications for orthodontic diagnosis.

Item
Sex Prediction of Hyoid Bone from Computed Tomography Images Using the DenseNet121 Deep Learning Model (Soc Chilena Anatomia, 2024)
Bakici, Rukiye Sumeyye; Cakmak, Muhammet; Oner, Zulal; Oner, Serkan
The study aims to demonstrate the success of deep learning methods in sex prediction using the hyoid bone. Neck Computed Tomography (CT) images of individuals aged 15-94 years were retrospectively reviewed for the study. The neck CT images were cleaned using the RadiAnt DICOM Viewer (version 2023.1) program, leaving only the hyoid bone. A total of 7 images, in the anterior, posterior, superior, inferior, right, left, and right-anterior-upward directions, were obtained from each patient's segmented hyoid bone image. In total, 2170 images were obtained from the hyoid bones of 310 males and 1820 images from the hyoid bones of 260 females. These 3990 images were expanded to 5000 images through data augmentation.
The dataset was divided into 80% for training, 10% for testing, and 10% for validation. Three deep learning models, DenseNet121, ResNet152, and VGG19, were compared. An accuracy rate of 87% was achieved with the ResNet152 model and 80.2% with the VGG19 model. The highest rate among the models compared was 89%, achieved by the DenseNet121 model. This model had a specificity of 0.87, a sensitivity of 0.90, and an F1 score of 0.89 in women, and a specificity of 0.90, a sensitivity of 0.87, and an F1 score of 0.88 in men. It was observed that sex could be predicted from the hyoid bone using the deep learning methods DenseNet121, ResNet152, and VGG19; thus, a method that had not previously been tried on this bone was applied. This study also brings us one step closer to strengthening and perfecting the use of these technologies, which will reduce the subjectivity of the methods and support the expert in the decision-making process of sex prediction.
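The evaluation protocol described in the last abstract (an 80/10/10 train/test/validation split and per-class specificity, sensitivity, and F1) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' code: the split helper and the confusion-matrix counts are hypothetical, and the metric functions simply apply the standard definitions of sensitivity (recall), specificity, and F1.

```python
import random

def split_dataset(items, seed=42):
    """Shuffle and split a dataset into 80% train, 10% test, 10% validation."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train = int(0.8 * n)
    n_test = int(0.1 * n)
    return (items[:n_train],
            items[n_train:n_train + n_test],
            items[n_train + n_test:])

def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and F1 for one class of a binary classifier."""
    sensitivity = tp / (tp + fn)          # true positive rate (recall)
    specificity = tn / (tn + fp)          # true negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1

# A 5000-image dataset splits into 4000 / 500 / 500, as in the study.
train, test, val = split_dataset(range(5000))
print(len(train), len(test), len(val))  # 4000 500 500

# Illustrative (hypothetical) confusion-matrix counts: sensitivity 0.90,
# specificity 0.87, F1 close to 0.89 -- the same shape of figures the
# abstract reports for the female class of the DenseNet121 model.
sens, spec, f1 = binary_metrics(tp=90, fp=13, tn=87, fn=10)
```

In a multi-class or two-class report like the one above, each class takes its turn as the "positive" class, which is why sensitivity for women equals specificity for men and vice versa.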