<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://repository.horizon.ac.id/items/browse?collection=791&amp;output=omeka-xml&amp;sort_field=added" accessDate="2026-04-11T06:06:08+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>1</pageNumber>
      <perPage>10</perPage>
      <totalResults>26</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="10537" public="1" featured="1">
    <fileContainer>
      <file fileId="10550">
        <src>https://repository.horizon.ac.id/files/original/70924ae0e94b4247adfc83646c937cef.pdf</src>
        <authentication>5d23a602f7c94db10fbe70b0975ac206</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112641">
                <text>Real-time Emotion Recognition Using the MobileNetV2 Architecture</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112642">
                <text>facial recognition; deep learning; MobileNetV2; CNN; TensorFlow</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112643">
                <text>Facial recognition technology is now advancing quickly and is being used extensively in a number of industries, including banking, business, security systems, and human-computer interfaces. However, existing facial recognition models face significant challenges in real-time emotion classification, particularly in terms of computational efficiency and adaptability to varying environmental conditions such as lighting and occlusion. Addressing these challenges, this research proposes a lightweight yet effective deep learning model based on MobileNetV2 to predict human facial emotions using a camera in real time. The model is trained on the FER-2013 dataset, which consists of seven emotion classes: anger, disgust, fear, joy, sadness, surprise, and neutral. The methodology includes deep learning-based feature extraction, convolutional neural networks (CNN), and optimization techniques to enhance real-time performance on resource-constrained devices. Experimental results demonstrate that the proposed model achieves a high accuracy of 94.23%, ensuring robust real-time emotion classification with a significantly reduced computational cost. Additionally, the model is validated using real-world camera data, confirming its effectiveness beyond static datasets and its applicability in practical real-time scenarios. The findings of this study contribute to advancing efficient emotion recognition systems, enabling their deployment in interactive AI applications, mental health monitoring, and smart environments.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112644">
                <text>Triyani Hendrawati, Anindya Apriliyanti Pravitasari, Nazamuddin, Riza Fazhriansyah Hermawan, Satrio Adilia Subekti, Muhammad Yasyfi</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112645">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6158/1102</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112646">
                <text>Department of Statistics, Faculty of Mathematics and Natural Sciences, Universitas Padjadjaran, Bandung, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112647">
                <text>July 17, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112648">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112649">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112650">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112651">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10538" public="1" featured="1">
    <fileContainer>
      <file fileId="10551">
        <src>https://repository.horizon.ac.id/files/original/7700cf7f6ce15eddbc8d3f572d56f89c.pdf</src>
        <authentication>af1942563bda77302ae544944b7bcd6c</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112652">
                <text>Benchmarking YOLOv8 Variants with Transfer Learning for Real-Time Detection and Classification of Road Cracks and Potholes</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112653">
                <text>classification; deep learning; road damage detection; transfer learning; YOLOv8</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112654">
                <text>Road damage, including potholes and cracks, is a significant issue frequently encountered in road infrastructure in many regions. Such conditions accelerate road degradation, increase the risk of traffic accidents, and significantly increase maintenance and repair costs. Although several deep learning models have been proposed for road damage detection, few studies have systematically compared the performance of lightweight YOLOv8 variants using a consistent dataset. To address this gap, this study proposes a road defect detection and classification model based on the YOLOv8 series, which is enhanced using transfer learning to improve performance and efficiency. The dataset, obtained from Roboflow, comprises 3,846 images categorized into training, validation, and testing sets. Three YOLOv8 variants (YOLOv8n, YOLOv8s, and YOLOv8m) were benchmarked for performance. A performance evaluation was performed using the metrics of precision, recall, and mean Average Precision (mAP). Results show that YOLOv8m achieved the highest precision (99.00%), recall (98.40%), and mAP (99.50%). In the pothole category, precision reached 98.70% and recall 99.30%; in the crack category, precision was 99.30% and recall 97.60%. The findings demonstrate that YOLOv8, particularly the YOLOv8m variant, is highly effective for real-time road damage detection and classification, offering a viable solution for intelligent transportation systems and automated infrastructure monitoring. This research has the potential to revolutionize infrastructure monitoring by enabling scalable, real-time, and cost-effective assessments of road conditions. It minimizes reliance on manual inspections, reduces human errors, and contributes to the development of intelligent transportation systems and predictive maintenance strategies.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112655">
                <text>Dede Kurniadi, A. Abdul Latif, Asri Mulyani, Hilmi Aulawi</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112656">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6710/1108</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112657">
                <text>Department of Computer Science, Institut Teknologi Garut, Garut, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112658">
                <text>August 15, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112659">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112660">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112661">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112662">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10539" public="1" featured="1">
    <fileContainer>
      <file fileId="10552">
        <src>https://repository.horizon.ac.id/files/original/c2e378a65c2feae559cb6abc819489a8.pdf</src>
        <authentication>df1b3c6126548b31b8b63ee19e81c315</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112663">
                <text>Comparing Optimization Algorithms in ANN Models for House Price Prediction in Pekanbaru</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112664">
                <text>AdaDelta; stochastic gradient descent (SGD); adaptive moment estimation (Adam); adaptive sharpness-aware minimization (ASAM); artificial neural network (ANN); house price prediction; optimization; Nadam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112665">
                <text>This study evaluates the performance of five optimization algorithms in Artificial Neural Network (ANN) models for predicting house prices in Pekanbaru. The optimizers tested include Adam, AdaDelta, Stochastic Gradient Descent (SGD), Nadam, and Adaptive Sharpness-Aware Minimization (ASAM). A total of 3,149 house sales records were collected from rumah123.com between January and December 2024. After cleaning 148 incomplete entries, 3,001 valid records remained. The dataset included seven features: price, location, number of bedrooms, number of bathrooms, land area, building area, and garage capacity, with the location encoded using one-hot encoding. The research involved a literature review, problem formulation, data acquisition, preprocessing, model development, and evaluation. Model performance was assessed using the Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE). The results show that SGD consistently achieved the best performance, particularly at a 90:10 train-test split, with the lowest MAPE (1.74%) and MSE (0.3279). Adam and Nadam also performed well, while ASAM had the highest error (MAPE 6.14%). These findings indicate that SGD was the most effective optimizer for this dataset. Future research should explore larger datasets and advanced hyperparameter tuning to improve the generalizability of this model.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112666">
                <text>Doni Winarso, Edo Arribe, Syahril, Aryanto, Muhardi, Sharulniza Musa</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112667">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6619/1109</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112668">
                <text>Department of Information Systems, Faculty of Computer Science, Universitas Muhammadiyah Riau, Riau, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112669">
                <text>August 17, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112670">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112671">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112672">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112673">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10540" public="1" featured="1">
    <fileContainer>
      <file fileId="10553">
        <src>https://repository.horizon.ac.id/files/original/53390545989d262a68b7cc14a2f39e4c.pdf</src>
        <authentication>32a4517472107a7a7d016cf15e498a8c</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112674">
                <text>Advancing Vehicle Logo Detection with DETR to Handle Small Logos and Low-Quality Images</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112675">
                <text>detection transformers; logo; object detection; vehicle</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112676">
                <text>Image-based vehicle logo detection is an important component in the implementation of vehicle information recognition technology, which supports the development of intelligent transportation systems. Vehicle logos, as elements that represent the identities of vehicle brands and models, play a significant role in completing vehicle identity data. The information obtained from this logo can be utilized to solve various traffic problems, such as vehicle document counterfeiting and theft, and for better traffic planning and management purposes. However, the main challenge in developing an accurate logo detection system lies in the wide variety of shapes, sizes, and positions of logos in different types of vehicles. In addition, the generally small size of logos, especially on certain vehicles, often makes it difficult for computer-based detection systems to recognize logos consistently, thus affecting the overall performance of the detection model. In this research, the Detection Transformers (DETR) method is used to build a vehicle logo detection system that focuses on small-scale logos. The testing process was conducted using the VL-10 dataset, which was specifically designed for vehicle logo detection evaluation. The results show that the DETR model can detect vehicle logos very well, even for small-scale logos. The model achieved an AP50 value of 0.952, which indicates a high level of accuracy and reliability in detecting the vehicle logo in the dataset used.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112677">
                <text>Rifky Fahrizal Ubaidillah, Mahmud Dwi Sulistiyo, Gamma Kosala, Ema Rachmawati, Deny Haryadi</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112678">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6236/1111</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112679">
                <text>School of Computing, Telkom University, Bandung, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112680">
                <text>August 17, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112681">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112682">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112683">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112684">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10541" public="1" featured="1">
    <fileContainer>
      <file fileId="10554">
        <src>https://repository.horizon.ac.id/files/original/d3ba11c82a5ed500f8f334a5ec6e6903.pdf</src>
        <authentication>71e5654ff3062bf6cbb79fd2618c6e89</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112685">
                <text>Adaptive Stress Prediction with GSR, SMOTE Balancing, and Random Forest Models</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112686">
                <text>GSR sensor; perceived stress scale; random forest; SMOTE balancing; stress detection</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112687">
                <text>Stress is a pervasive condition that affects mental health, productivity, and quality of life across populations. Traditional methods for stress assessment, such as the Perceived Stress Scale (PSS), rely on retrospective self-reporting and are limited by subjectivity and delayed feedback. To address this gap, this study developed an integrated real-time stress monitoring system combining Galvanic Skin Response (GSR) sensors, Internet of Things (IoT) technology, and machine learning algorithms. Primary GSR data were collected from 30 participants under varied conditions, supplemented by secondary data from the WESAD dataset. A Random Forest classifier was employed to categorize stress into four levels: normal, mild, moderate, and severe. To address class imbalance, the Synthetic Minority Over-sampling Technique (SMOTE) was applied, leading to improved model robustness. The system achieved a cross-validated classification accuracy of 69%, with substantial improvements in the detection of moderate and severe stress cases compared to traditional threshold-based methods. A strong agreement (Cohen’s Kappa κ = 0.82) was observed between system predictions and PSS-based stress assessments. Feature importance analysis identified mean GSR value and Skin Conductance Response (SCR) amplitude as the most influential indicators of stress. The system was evaluated for usability, receiving high user ratings in terms of accessibility, simplicity, and interactivity. A simple Python-based command-line interface (CLI) was also developed for real-time stress prediction based on input features. This research demonstrates the feasibility and effectiveness of combining physiological sensing, predictive analytics, and user-friendly interfaces to enable scalable and adaptive stress monitoring. Future developments will focus on integrating additional physiological modalities and deep learning techniques to enhance predictive performance and personalization in clinical and everyday contexts.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112688">
                <text>Rino Ferdian Surakusumah, Rechi Yudha Apza</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112689">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6588/1112</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112690">
                <text>Department of Medical Electronics Engineering Technology, Faculty of Health Technology, Al Insyirah Institut of Health and Technology, Pekanbaru, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112691">
                <text>August 17, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112692">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112693">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112694">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112695">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10542" public="1" featured="1">
    <fileContainer>
      <file fileId="10555">
        <src>https://repository.horizon.ac.id/files/original/d1f8a710f23d295428af206e109e6310.pdf</src>
        <authentication>d8382b96fca7277f51ff0bcbe0ec2097</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112696">
                <text>Enhancing Agile Defect Prediction with Optimized Machine Learning and Feature Selection</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112697">
                <text>agile software practices; bug prediction; defect classification; feature selection; metaheuristic optimization</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112698">
                <text>In Agile software development, efficient defect prediction is crucial because of the rapid and iterative nature of the delivery. Conventional methods that rely on source code or commit logs often fail to capture the critical contextual signals necessary for early bug detection. This study proposes a hybrid machine learning framework that leverages enriched contextual features from Jira issue tickets and combines them with optimized feature selection techniques. Various classification models, including Random Forest, XGBoost, CatBoost, SVM, and Transformer, are employed to predict defects. To further enhance model performance, metaheuristic-based feature selection methods such as the Bat Algorithm (BA) and Particle Swarm Optimization (PSO) are applied to reduce dimensionality and improve predictive relevance. Experimental results show that Random Forest with BA optimization achieves the highest performance, with an F1-score of 0.83 and an AUC-ROC of 0.86, outperforming other models. While the Transformer model does not surpass tree-based algorithms in all metrics, it shows high recall and competitive F1-scores, making it suitable for high-sensitivity applications. These findings highlight the importance of integrating optimized machine learning models and feature selection techniques to improve model robustness, reduce computational complexity, and meet the needs of Agile development. This approach supports software teams in prioritizing quality assurance tasks, reducing long-term maintenance costs, and optimizing defect management processes.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112699">
                <text>Faiq Dhimas Wicaksono, Daniel Siahaan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112700">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6713/1113</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112701">
                <text>Master Program of Technology Management, Interdisciplinary School of Management and Technology, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112702">
                <text>August 18, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112703">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112704">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112705">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112706">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10543" public="1" featured="1">
    <fileContainer>
      <file fileId="10556">
        <src>https://repository.horizon.ac.id/files/original/7932b685dad63ff2c3df3e69c47e7408.pdf</src>
        <authentication>d0c3149a92ed9f68646e76f39543ee98</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112707">
                <text>Handling Imbalance in Javanese Manuscript Character Dataset using Skeleton-based Balancing Generative Adversarial Networks</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112708">
                <text>character classification; data imbalance; generative adversarial networks; Javanese manuscript; skeleton-based generation</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112709">
                <text>Javanese script is an important part of Indonesia’s cultural heritage, representing cultural values from the past. However, recognizing and classifying Javanese characters within manuscripts is challenging due to the limited availability of data and uneven distribution of character classes. The decline in formal use of Javanese script has drastically reduced the pool of manuscript samples, causing certain characters to appear rarely and skewing class frequencies. Existing methods that utilize Generative Adversarial Networks (GANs) attempt to address this problem. However, they often struggle to generate characters that are both consistent and visually accurate in terms of structural details. To address these issues, this study introduces a skeleton-based balancing GAN (SkelBAGAN), which improves the structural details of the previous method for generating characters. The proposed method introduces three main enhancements: (i) a layer for extracting the character skeleton structure, (ii) an optimized pretrained network using an autoencoder for learning the skeleton distribution, and (iii) refinement of the evaluation function, preserving both the distribution and structural fidelity in the adversarial process. The performance of the proposed model is evaluated against previous methods using the Fréchet Inception Distance (FID) to assess distribution quality and the Structural Similarity Index Measure (SSIM) to evaluate structural fidelity. The results indicate that the proposed methods outperform previous methods in balancing the FID and SSIM metrics. The integration of all enhancements in SkelBAGAN achieves the lowest FID, indicating improved generative quality while maintaining competitive SSIM values. The qualitative study indicates that SkelBAGAN outperforms previous methods in character generation. These results highlight how the skeleton-based improvement of the quality of generated characters enhances the recognition performance for underrepresented Javanese characters in imbalanced datasets. Ultimately, this work contributes to the broader effort to preserve the Javanese script as a vital element of Indonesia’s cultural identity.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112710">
                <text>Muhammad ‘Arif Faizin, Nanik Suciati, Chastine Fatichah</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112711">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6572/1121</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112712">
                <text>Department of Informatics, Faculty of Intelligent Electrical and Informatics Technology</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112713">
                <text>August 18, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112714">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112715">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112716">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112717">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10544" public="1" featured="1">
    <fileContainer>
      <file fileId="10557">
        <src>https://repository.horizon.ac.id/files/original/2fd75f7d73ea77b6d1b79a72e5d3d819.pdf</src>
        <authentication>bf4ac6860c92e1b88f140fdbb7aa33e2</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112718">
                <text>Sonified Cryptography: Secure Text Encoding with DNA and Non-Speech Audio</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112719">
                <text>cryptography; dual-layer encryption; DNA encryption; information security; fast Fourier transforms; non-speech audification</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112720">
                <text>The increasing demand for data security in digital communication, particularly in high-risk sectors like defense, has led to the exploration of innovative encryption approaches. This study presents a dual-layer encryption model that enhances information concealment by integrating DNA-based cryptography with audio signal encoding. Initially, plaintext is converted into binary and obfuscated using XOR operations with randomly generated DNA sequences. The resulting DNA nucleotide sequences (A, G, C, T) form the first layer of encryption. In the second layer, these sequences are audified by mapping each nucleotide to a specific frequency, thereby transforming the encrypted data into non-speech audio signals. To evaluate the integrity and uniqueness of the encryption-decryption process, Fast Fourier Transform (FFT)-based cross-correlation is applied, comparing the original and recovered audio signals. The proposed method is implemented in MATLAB and tested on various input strings. Results demonstrate significant improvements in encryption speed and security, with the added benefit of imperceptibility in audio form. The method outperforms existing DNA-based techniques in terms of computational efficiency and resistance to brute-force attacks. This hybrid cryptographic technique offers a promising solution for secure, covert data transmission in sensitive applications.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112721">
                <text>Chandrasekaran Saravanakumar, Neelamegam Subhashini</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112722">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6591/1115</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112723">
                <text>Department of Electronics and Communication Engineering, SRM Valliammai Engineering College, Potheri, India</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112724">
                <text>August 18, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112725">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112726">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112727">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112728">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10545" public="1" featured="1">
    <fileContainer>
      <file fileId="10558">
        <src>https://repository.horizon.ac.id/files/original/9567351903d6d4279db4e4ad75e11249.pdf</src>
        <authentication>221908a12d5d9fea1822a95e449cbfa5</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112729">
                <text>Open-Set Recognition for Potato Leaf Disease Identification Using OpenMax</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112730">
                <text>computer vision; Open-Set Recognition; OpenMax; potato leaf diseases; Xception</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112731">
                <text>Traditional methods for identifying potato leaf diseases rely on manual visual inspection, which is prone to human error and inefficiencies. Although machine learning models have improved automation, conventional closed-set classifiers fail to recognize unknown diseases outside their training scope, limiting real-world applicability. This study addresses this gap by implementing Open-Set Recognition (OSR) using the OpenMax framework to classify known potato leaf diseases while effectively rejecting unknown pathologies. By leveraging the Xception architecture with dual learning schedulers (ReduceLROnPlateau and StepLR), we optimized the OpenMax parameters, including distance metrics (Euclidean, Eucos) and rejection thresholds. After rigorous tuning, the model achieved 86.8% accuracy and 86.4% F1-score under an openness score of 18.3%, with optimal performance using Euclidean distance and a 0.95 threshold. The results demonstrate robust discrimination between known classes (potato late blight, early blight, healthy leaves) and visually similar unknown classes (e.g., tomato diseases, healthy bell peppers). This study enhances AI-driven agricultural diagnostics by bridging the gap between closed-set precision and open-set practicality, offering a scalable solution for real-world disease identification where novel pathogens may emerge.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112732">
                <text>Ike Verawati, Mambaul Hisam, Yoga Pristyanto</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112733">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6525/1116</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112734">
                <text>Informatika, Fakultas Ilmu Komputer, Universitas Amikom Yogyakarta, Sleman, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112735">
                <text>August 18, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112736">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112737">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112738">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112739">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10546" public="1" featured="1">
    <fileContainer>
      <file fileId="10559">
        <src>https://repository.horizon.ac.id/files/original/822eaed1cde56cd5865b617eef986940.pdf</src>
        <authentication>09d3b76c1a7b21796a0f1046be47d056</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112740">
                <text>Improving the Accuracy of Tourism Recommendation System Based on Neural Collaborative Filtering</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112741">
                <text>neural collaborative filtering; rating; recommendation system; review; tourism</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112742">
                <text>This study proposes a Neural Collaborative Filtering (NCF) model for tourism recommendation systems by integrating user ratings and review data. This model was developed to overcome the limitations of conventional recommendation systems that rely solely on numerical data, by adding contextual information from user reviews to improve the accuracy of preference prediction. The development process includes data preprocessing, conversion of text reviews into numerical representations using embedding techniques, and the application of NCF models with various parameter configurations. Experimental results show that the NCF model that combines rating and review data produces the best performance, with a Root Mean Square Error (RMSE) of 0.892, a Hit Ratio at 10 (HR@10) of 0.735, and a Normalized Discounted Cumulative Gain at 10 (NDCG@10) of 0.629, outperforming models that use only one type of data. These results demonstrate that combining numerical and textual information can improve the model's understanding of user preferences, resulting in more relevant tourist destination recommendations. These findings contribute to the development of artificial intelligence-based recommendation systems in the tourism sector.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112743">
                <text>Renita Astri, Lai Po Hung, Suaini Binti Sura, Ahmad Kamal</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112744">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6516/1119</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112745">
                <text>Sistem Informasi, Fakultas Farmasi Sains dan Teknologi, Universitas Dharma Andalas, Padang, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112746">
                <text>August 20, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112747">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112748">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112749">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112750">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
