<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://repository.horizon.ac.id/items/browse?collection=791&amp;output=omeka-xml&amp;page=3" accessDate="2026-04-14T18:14:10+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>3</pageNumber>
      <perPage>10</perPage>
      <totalResults>26</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="10542" public="1" featured="1">
    <fileContainer>
      <file fileId="10555">
        <src>https://repository.horizon.ac.id/files/original/d1f8a710f23d295428af206e109e6310.pdf</src>
        <authentication>d8382b96fca7277f51ff0bcbe0ec2097</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112696">
                <text>Enhancing Agile Defect Prediction with Optimized Machine Learning and Feature Selection</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112697">
                <text>agile software practices; bug prediction; defect classification; feature selection; metaheuristic optimization</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112698">
                <text>In Agile software development, efficient defect prediction is crucial because of the rapid and iterative nature of delivery. Conventional methods that rely on source code or commit logs often fail to capture the critical contextual signals necessary for early bug detection. This study proposes a hybrid machine learning framework that leverages enriched contextual features from Jira issue tickets and combines them with optimized feature selection techniques. Various classification models, including Random Forest, XGBoost, CatBoost, SVM, and Transformer, are employed to predict defects. To further enhance model performance, metaheuristic-based feature selection methods such as the Bat Algorithm (BA) and Particle Swarm Optimization (PSO) are applied to reduce dimensionality and improve predictive relevance. Experimental results show that Random Forest with BA optimization achieves the highest performance, with an F1-score of 0.83 and an AUC-ROC of 0.86, outperforming other models. While the Transformer model does not surpass tree-based algorithms in all metrics, it shows high recall and competitive F1-scores, making it suitable for high-sensitivity applications. These findings highlight the importance of integrating optimized machine learning models and feature selection techniques to improve model robustness, reduce computational complexity, and meet the needs of Agile development. This approach supports software teams in prioritizing quality assurance tasks, reducing long-term maintenance costs, and optimizing defect management processes.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112699">
                <text>Faiq Dhimas Wicaksono, Daniel Siahaan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112700">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6713/1113</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112701">
                <text>Master Program of Technology Management, Interdisciplinary School of Management and Technology, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112702">
                <text>August 18, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112703">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112704">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112705">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112706">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10541" public="1" featured="1">
    <fileContainer>
      <file fileId="10554">
        <src>https://repository.horizon.ac.id/files/original/d3ba11c82a5ed500f8f334a5ec6e6903.pdf</src>
        <authentication>71e5654ff3062bf6cbb79fd2618c6e89</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112685">
                <text>Adaptive Stress Prediction with GSR, SMOTE Balancing, and Random Forest Models</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112686">
                <text>GSR sensor; perceived stress scale; random forest; SMOTE balancing; stress detection</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112687">
                <text>Stress is a pervasive condition that affects mental health, productivity, and quality of life across populations. Traditional methods for stress assessment, such as the Perceived Stress Scale (PSS), rely on retrospective self-reporting and are limited by subjectivity and delayed feedback. To address this gap, this study developed an integrated real-time stress monitoring system combining Galvanic Skin Response (GSR) sensors, Internet of Things (IoT) technology, and machine learning algorithms. Primary GSR data were collected from 30 participants under varied conditions, supplemented by secondary data from the WESAD dataset. A Random Forest classifier was employed to categorize stress into four levels: normal, mild, moderate, and severe. To address class imbalance, the Synthetic Minority Over-sampling Technique (SMOTE) was applied, leading to improved model robustness. The system achieved a cross-validated classification accuracy of 69%, with substantial improvements in the detection of moderate and severe stress cases compared to traditional threshold-based methods. A strong agreement (Cohen’s Kappa κ = 0.82) was observed between system predictions and PSS-based stress assessments. Feature importance analysis identified mean GSR value and Skin Conductance Response (SCR) amplitude as the most influential indicators of stress. The system was evaluated for usability, receiving high user ratings in terms of accessibility, simplicity, and interactivity. A simple Python-based command-line interface (CLI) was also developed for real-time stress prediction based on input features. This research demonstrates the feasibility and effectiveness of combining physiological sensing, predictive analytics, and user-friendly interfaces to enable scalable and adaptive stress monitoring. Future developments will focus on integrating additional physiological modalities and deep learning techniques to enhance predictive performance and personalization in clinical and everyday contexts.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112688">
                <text>Rino Ferdian Surakusumah, Rechi Yudha Apza</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112689">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6588/1112</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112690">
                <text>Department of Medical Electronics Engineering Technology, Faculty of Health Technology, Al Insyirah Institute of Health and Technology, Pekanbaru, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112691">
                <text>August 17, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112692">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112693">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112694">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112695">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10540" public="1" featured="1">
    <fileContainer>
      <file fileId="10553">
        <src>https://repository.horizon.ac.id/files/original/53390545989d262a68b7cc14a2f39e4c.pdf</src>
        <authentication>32a4517472107a7a7d016cf15e498a8c</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112674">
                <text>Advancing Vehicle Logo Detection with DETR to Handle Small Logos and Low-Quality Images</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112675">
                <text>detection transformers; logo; object detection; vehicle</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112676">
                <text>Image-based vehicle logo detection is an important component in the implementation of vehicle information recognition technology, which supports the development of intelligent transportation systems. Vehicle logos, as elements that represent the identities of vehicle brands and models, play a significant role in completing vehicle identity data. The information obtained from these logos can be utilized to solve various traffic problems, such as vehicle document counterfeiting and theft, and for better traffic planning and management purposes. However, the main challenge in developing an accurate logo detection system lies in the wide variety of shapes, sizes, and positions of logos on different types of vehicles. In addition, the generally small size of logos, especially on certain vehicles, often makes it difficult for computer-based detection systems to recognize logos consistently, thus affecting the overall performance of the detection model. In this research, the Detection Transformers (DETR) method is used to build a vehicle logo detection system that focuses on small-scale logos. The testing process was conducted using the VL-10 dataset, which was specifically designed for vehicle logo detection evaluation. The results show that the DETR model can detect vehicle logos very well, even for small-scale logos. The model achieved an AP50 value of 0.952, which indicates a high level of accuracy and reliability in detecting vehicle logos in the dataset used.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112677">
                <text>Rifky Fahrizal Ubaidillah, Mahmud Dwi Sulistiyo, Gamma Kosala, Ema Rachmawati, Deny Haryadi</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112678">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6236/1111</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112679">
                <text>School of Computing, Telkom University, Bandung, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112680">
                <text>August 17, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112681">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112682">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112683">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112684">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10539" public="1" featured="1">
    <fileContainer>
      <file fileId="10552">
        <src>https://repository.horizon.ac.id/files/original/c2e378a65c2feae559cb6abc819489a8.pdf</src>
        <authentication>df1b3c6126548b31b8b63ee19e81c315</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112663">
                <text>Comparing Optimization Algorithms in ANN Models for House Price Prediction in Pekanbaru</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112664">
                <text>AdaDelta; stochastic gradient descent (SGD); adaptive moment estimation (Adam); adaptive sharpness-aware minimization (ASAM); artificial neural network (ANN); house price prediction; optimization; nadam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112665">
                <text>This study evaluates the performance of five optimization algorithms in Artificial Neural Network (ANN) models for predicting house prices in Pekanbaru. The optimizers tested include Adam, AdaDelta, Stochastic Gradient Descent (SGD), Nadam, and Adaptive Sharpness-Aware Minimization (ASAM). A total of 3,149 house sales records were collected from rumah123.com between January and December 2024. After cleaning 148 incomplete entries, 3,001 valid records remained. The dataset included seven features: price, location, number of bedrooms, number of bathrooms, land area, building area, and garage capacity, with the location encoded using one-hot encoding. The research involved a literature review, problem formulation, data acquisition, preprocessing, model development, and evaluation. Model performance was assessed using the Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE). The results show that SGD consistently achieved the best performance, particularly at a 90:10 train-test split, with the lowest MAPE (1.74%) and MSE (0.3279). Adam and Nadam also performed well, while ASAM had the highest error (MAPE 6.14%). These findings indicate that SGD was the most effective optimizer for this dataset. Future research should explore larger datasets and advanced hyperparameter tuning to improve the generalizability of this model.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112666">
                <text>Doni Winarso, Edo Arribe, Syahril, Aryanto, Muhardi, Sharulniza Musa</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112667">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6619/1109</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112668">
                <text>Department of Information Systems, Faculty of Computer Science, Universitas Muhammadiyah Riau, Riau, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112669">
                <text>August 17, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112670">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112671">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112672">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112673">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10538" public="1" featured="1">
    <fileContainer>
      <file fileId="10551">
        <src>https://repository.horizon.ac.id/files/original/7700cf7f6ce15eddbc8d3f572d56f89c.pdf</src>
        <authentication>af1942563bda77302ae544944b7bcd6c</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112652">
                <text>Benchmarking YOLOv8 Variants with Transfer Learning for Real-Time Detection and Classification of Road Cracks and Potholes</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112653">
                <text>classification; deep learning; road damage detection; transfer learning; YOLOv8</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112654">
                <text>Road damage, including potholes and cracks, is a significant issue frequently encountered in road infrastructure in many regions. Such conditions accelerate road degradation, increase the risk of traffic accidents, and significantly increase maintenance and repair costs. Although several deep learning models have been proposed for road damage detection, few studies have systematically compared the performance of lightweight YOLOv8 variants using a consistent dataset. To address this gap, this study proposes a road defect detection and classification model based on the YOLOv8 series, which is enhanced using transfer learning to improve performance and efficiency. The dataset, obtained from Roboflow, comprises 3,846 images categorized into training, validation, and testing sets. Three YOLOv8 variants (YOLOv8n, YOLOv8s, and YOLOv8m) were benchmarked for performance. A performance evaluation was performed using the metrics of precision, recall, and mean Average Precision (mAP). Results show that YOLOv8m achieved the highest precision (99.00%), recall (98.40%), and mAP (99.50%). In the pothole category, precision reached 98.70% and recall 99.30%; in the crack category, precision was 99.30% and recall 97.60%. The findings demonstrate that YOLOv8, particularly the YOLOv8m variant, is highly effective for real-time road damage detection and classification, offering a viable solution for intelligent transportation systems and automated infrastructure monitoring. This research has the potential to revolutionize infrastructure monitoring by enabling scalable, real-time, and cost-effective assessments of road conditions. It minimizes reliance on manual inspections, reduces human errors, and contributes to the development of intelligent transportation systems and predictive maintenance strategies.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112655">
                <text>Dede Kurniadi, A. Abdul Latif, Asri Mulyani, Hilmi Aulawi</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112656">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6710/1108</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112657">
                <text>Department of Computer Science, Institut Teknologi Garut, Garut, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112658">
                <text>August 15, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112659">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112660">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112661">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112662">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10537" public="1" featured="1">
    <fileContainer>
      <file fileId="10550">
        <src>https://repository.horizon.ac.id/files/original/70924ae0e94b4247adfc83646c937cef.pdf</src>
        <authentication>5d23a602f7c94db10fbe70b0975ac206</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112641">
                <text>Real-time Emotion Recognition Using the MobileNetV2 Architecture</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112642">
                <text>facial recognition; deep learning; MobileNetV2; CNN; TensorFlow</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112643">
                <text>Facial recognition technology is now advancing quickly and is being used extensively in a number of industries, including banking, business, security systems, and human-computer interfaces. However, existing facial recognition models face significant challenges in real-time emotion classification, particularly in terms of computational efficiency and adaptability to varying environmental conditions such as lighting and occlusion. Addressing these challenges, this research proposes a lightweight yet effective deep learning model based on MobileNetV2 to predict human facial emotions using a camera in real time. The model is trained on the FER-2013 dataset, which consists of seven emotion classes: anger, disgust, fear, joy, sadness, surprise, and neutral. The methodology includes deep learning-based feature extraction, convolutional neural networks (CNN), and optimization techniques to enhance real-time performance on resource-constrained devices. Experimental results demonstrate that the proposed model achieves a high accuracy of 94.23%, ensuring robust real-time emotion classification with a significantly reduced computational cost. Additionally, the model is validated using real-world camera data, confirming its effectiveness beyond static datasets and its applicability in practical real-time scenarios. The findings of this study contribute to advancing efficient emotion recognition systems, enabling their deployment in interactive AI applications, mental health monitoring, and smart environments.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112644">
                <text>Triyani Hendrawati, Anindya Apriliyanti Pravitasari, Nazamuddin, Riza Fazhriansyah Hermawan, Satrio Adilia Subekti, Muhammad Yasyfi</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112645">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6158/1102</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112646">
                <text>Department of Statistics, Faculty of Mathematics and Natural Sciences, Universitas Padjadjaran, Bandung, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112647">
                <text>July 17, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112648">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112649">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112650">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112651">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
