<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://repository.horizon.ac.id/items/browse?collection=791&amp;output=omeka-xml&amp;sort_field=Dublin+Core%2CTitle" accessDate="2026-04-11T01:05:02+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>1</pageNumber>
      <perPage>10</perPage>
      <totalResults>26</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="10562" public="1" featured="1">
    <fileContainer>
      <file fileId="10575">
        <src>https://repository.horizon.ac.id/files/original/cc95ccbc3deb4e32caa81c3e15d950e8.pdf</src>
        <authentication>1767b98f8de8b2392a7debd3974f23fb</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112917">
                <text>A New Triple-Weighted K-Nearest Neighbor Algorithm for Tomato Maturity Classification</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112918">
                <text>DW-KNN; HSV; KNN; TW-KNN; W-KNN</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112919">
                <text>As climacteric products, tomatoes are highly sensitive to harvesting and processing. The sorting of tomatoes can be significantly improved by utilizing Hue Saturation Value (HSV) color features that are classified using neighboring algorithms, such as K-Nearest Neighbor (KNN), Weighted K-Nearest Neighbor (W-KNN), and DW-KNN. However, the DW-KNN algorithm does not consider the relative relationship between the farthest, nearest, and surrounding neighbors, which may impact the classification accuracy, particularly in datasets with uneven distributions. This study proposes a Triple-Weighted K-Nearest Neighbor (TW-KNN) algorithm for tomato image classification. This algorithm effectively handles the problem of sensitivity and outliers in the data distribution and considers the relationship between neighboring distances. The classification data consisted of 400 tomato images with five maturity levels divided into training and testing sets using k-fold cross-validation. Tests were conducted using several variations of parameter k, namely 4, 6, 9, and 15, to evaluate the classification performance. The results show that the proposed TW-KNN algorithm consistently outperforms other methods by producing better classification results. This is demonstrated by an accuracy rate of 95.52% across different values of k. The superior performance of the TW-KNN highlights its ability to provide robust and stable classification results compared to conventional KNN variants. This finding indicates that the TW-KNN is more effective in consistently classifying tomato fruits, making it a promising approach for automated fruit sorting applications.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112920">
                <text>Lidya Ningsih, Arif Mudi Priyatno, Addini Yusmar</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112921">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6441/1128</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112922">
                <text>Department of Digital Business, Faculty of Economics and Business, Universitas Pahlawan Tuanku Tambusai, Kampar, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112923">
                <text>August 29, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112924">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112925">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112926">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112927">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10541" public="1" featured="1">
    <fileContainer>
      <file fileId="10554">
        <src>https://repository.horizon.ac.id/files/original/d3ba11c82a5ed500f8f334a5ec6e6903.pdf</src>
        <authentication>71e5654ff3062bf6cbb79fd2618c6e89</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112685">
                <text>Adaptive Stress Prediction with GSR, SMOTE Balancing, and Random Forest Models</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112686">
                <text>GSR sensor; perceived stress scale; random forest; SMOTE balancing; stress detection</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112687">
                <text>Stress is a pervasive condition that affects mental health, productivity, and quality of life across populations. Traditional methods for stress assessment, such as the Perceived Stress Scale (PSS), rely on retrospective self-reporting and are limited by subjectivity and delayed feedback. To address this gap, this study developed an integrated real-time stress monitoring system combining Galvanic Skin Response (GSR) sensors, Internet of Things (IoT) technology, and machine learning algorithms. Primary GSR data were collected from 30 participants under varied conditions, supplemented by secondary data from the WESAD dataset. A Random Forest classifier was employed to categorize stress into four levels: normal, mild, moderate, and severe. To address class imbalance, the Synthetic Minority Over-sampling Technique (SMOTE) was applied, leading to improved model robustness. The system achieved a cross-validated classification accuracy of 69%, with substantial improvements in the detection of moderate and severe stress cases compared to traditional threshold-based methods. A strong agreement (Cohen’s Kappa κ = 0.82) was observed between system predictions and PSS-based stress assessments. Feature importance analysis identified mean GSR value and Skin Conductance Response (SCR) amplitude as the most influential indicators of stress. The system was evaluated for usability, receiving high user ratings in terms of accessibility, simplicity, and interactivity. A simple Python-based command-line interface (CLI) was also developed for real-time stress prediction based on input features. This research demonstrates the feasibility and effectiveness of combining physiological sensing, predictive analytics, and user-friendly interfaces to enable scalable and adaptive stress monitoring. Future developments will focus on integrating additional physiological modalities and deep learning techniques to enhance predictive performance and personalization in clinical and everyday contexts.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112688">
                <text>Rino Ferdian Surakusumah, Rechi Yudha Apza</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112689">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6588/1112</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112690">
                <text>Department of Medical Electronics Engineering Technology, Faculty of Health Technology, Al Insyirah Institut of Health and Technology, Pekanbaru, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112691">
                <text>August 17, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112692">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112693">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112694">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112695">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10540" public="1" featured="1">
    <fileContainer>
      <file fileId="10553">
        <src>https://repository.horizon.ac.id/files/original/53390545989d262a68b7cc14a2f39e4c.pdf</src>
        <authentication>32a4517472107a7a7d016cf15e498a8c</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112674">
                <text>Advancing Vehicle Logo Detection with DETR to Handle Small Logos and Low-Quality Images</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112675">
                <text>detection transformers; logo; object detection; vehicle</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112676">
                <text>Image-based vehicle logo detection is an important component in the implementation of vehicle information recognition technology, which supports the development of intelligent transportation systems. Vehicle logos, as elements that represent the identities of vehicle brands and models, play a significant role in completing vehicle identity data. The information obtained from these logos can be utilized to solve various traffic problems, such as vehicle document counterfeiting and theft, and for better traffic planning and management purposes. However, the main challenge in developing an accurate logo detection system lies in the wide variety of shapes, sizes, and positions of logos in different types of vehicles. In addition, the generally small size of logos, especially on certain vehicles, often makes it difficult for computer-based detection systems to recognize logos consistently, thus affecting the overall performance of the detection model. In this research, the Detection Transformers (DETR) method is used to build a vehicle logo detection system that focuses on small-scale logos. The testing process was conducted using the VL-10 dataset, which was specifically designed for vehicle logo detection evaluation. The results show that the DETR model can detect vehicle logos very well, even for small-scale logos. The model achieved an AP50 value of 0.952, which indicates a high level of accuracy and reliability in detecting the vehicle logo in the dataset used.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112677">
                <text>Rifky Fahrizal Ubaidillah, Mahmud Dwi Sulistiyo, Gamma Kosala, Ema Rachmawati, Deny Haryadi</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112678">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6236/1111</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112679">
                <text>School of Computing, Telkom University, Bandung, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112680">
                <text>August 17, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112681">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112682">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112683">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112684">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10554" public="1" featured="1">
    <fileContainer>
      <file fileId="10567">
        <src>https://repository.horizon.ac.id/files/original/43fee82eedfb9fcc97aa2fc07ef577b7.pdf</src>
        <authentication>5d23a602f7c94db10fbe70b0975ac206</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112828">
                <text>Analysis of the Impact of Backpropagation Hyperparameter Optimization on Heart Disease Prediction Models</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112829">
                <text>backpropagation neural network (BPNN); early diagnosis; heart disease prediction; hyperparameter optimization; machine learning</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112830">
                <text>Heart disease represents a significant global health concern, underscoring the importance of early and accurate predictive models to minimize complications and improve patient prognosis. The Backpropagation Neural Network (BPNN) has been widely utilized for heart disease prediction; however, its effectiveness is highly dependent on the selection of appropriate hyperparameters, including the number of neurons, activation function, optimizer, and batch size. In this study, the influence of hyperparameter optimization on BPNN performance was investigated. A baseline BPNN model was evaluated alongside an optimized counterpart in which key hyperparameters had been systematically fine-tuned to improve both predictive accuracy and model stability. Both models were trained and validated using an identical dataset, and their performances were assessed based on Accuracy, Precision, Recall, Mean Squared Error (MSE), and Mean Absolute Error (MAE). The optimized model demonstrated marginally higher accuracy (99.11% compared to 99.09%) and slightly lower error rates (MSE and MAE of 0.0089 versus 0.0091). Moreover, it achieved superior precision, indicating enhanced reliability in correctly identifying heart disease cases. While the performance improvement was relatively small, the optimized model exhibited greater consistency and balance. These results emphasize the critical role of hyperparameter tuning in enhancing the predictive capability of neural network models in medical applications. The study contributes to the advancement of more accurate and dependable AI-based tools for early heart disease diagnosis. Future research may benefit from employing advanced optimization strategies such as Bayesian Optimization or Genetic Algorithms and leveraging larger, more diverse datasets to improve generalizability.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112831">
                <text>Nita Syahputri, Putrama Alkhairi, Enok Tuti Alawiah</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112832">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6473/1110</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112833">
                <text>Faculty of Engineering and Computer Science, Department of Information Systems, Universitas Potensi Utama, Medan, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112834">
                <text>August 17, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112835">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112836">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112837">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112838">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10538" public="1" featured="1">
    <fileContainer>
      <file fileId="10551">
        <src>https://repository.horizon.ac.id/files/original/7700cf7f6ce15eddbc8d3f572d56f89c.pdf</src>
        <authentication>af1942563bda77302ae544944b7bcd6c</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112652">
                <text>Benchmarking YOLOv8 Variants with Transfer Learning for Real-Time Detection and Classification of Road Cracks and Potholes</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112653">
                <text>classification; deep learning; road damage detection; transfer learning; YOLOv8</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112654">
                <text>Road damage, including potholes and cracks, is a significant issue frequently encountered in road infrastructure in many regions. Such conditions accelerate road degradation, increase the risk of traffic accidents, and significantly increase the maintenance and repair costs. Although several deep learning models have been proposed for road damage detection, few studies have systematically compared the performance of lightweight YOLOv8 variants using a consistent dataset. To address this gap, this study proposes a road defect detection and classification model based on the YOLOv8 series, which is enhanced using transfer learning to improve performance and efficiency. The dataset, obtained from Roboflow, comprises 3,846 images categorized into training, validation, and testing sets. Three YOLOv8 variants (YOLOv8n, YOLOv8s, and YOLOv8m) were benchmarked for performance. A performance evaluation was performed using the metrics of precision, recall, and mean Average Precision (mAP). Results show that YOLOv8m achieved the highest precision (99.00%), recall (98.40%), and mAP (99.50%). In the pothole category, precision reached 98.70% and recall 99.30%; in the crack category, precision was 99.30% and recall 97.60%. The findings demonstrate that YOLOv8, particularly the YOLOv8m variant, is highly effective for real-time road damage detection and classification, offering a viable solution for intelligent transportation systems and automated infrastructure monitoring. This research has the potential to revolutionize infrastructure monitoring by enabling scalable, real-time, and cost-effective assessments of road conditions. It minimizes reliance on manual inspections, reduces human errors, and contributes to the development of intelligent transportation systems and predictive maintenance strategies.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112655">
                <text>Dede Kurniadi, A. Abdul Latif, Asri Mulyani, Hilmi Aulawi</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112656">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6710/1108</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112657">
                <text>Department of Computer Science, Institut Teknologi Garut, Garut, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112658">
                <text>August 15, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112659">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112660">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112661">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112662">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10552" public="1" featured="1">
    <fileContainer>
      <file fileId="10565">
        <src>https://repository.horizon.ac.id/files/original/dea3af72cfb743cc1e7a642c20ec2cfc.pdf</src>
        <authentication>06ac7b61fbfa02929dacd1ce61cfc785</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112806">
                <text>BERT Model Fine-tuned for Scientific Document Classification and Recommendation</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112807">
                <text>BERT; cosine similarity; document classification; fine-tuning; recommendation system</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112808">
                <text>The increasing number of academic documents requires efficient and accurate classification and recommendation systems to assist in retrieving relevant information. This system is built using the "bert-base-uncased" model from Hugging Face, which has been fine-tuned to improve the classification accuracy and relevance of document recommendations. The dataset used consists of 2,000 academic documents in the field of computer science, with features including titles, abstracts, and keywords, which were combined into a single input for the model. Document similarity is measured using cosine similarity, resulting in recommendations based on semantic proximity. Unlike traditional approaches, which rely primarily on word frequency or surface-level matching, the proposed method leverages BERT's contextual embeddings to capture deeper semantic meanings and relationships between documents. This allows for more accurate classification and more context-aware recommendations. Evaluation results show that the best model configuration (learning rate 3e-5, batch size 32, optimizer AdamW) achieved 89.5% training accuracy and an F1-score of 0.8947, while testing yielded 91% accuracy and 90% F1-score. The recommendation system consistently produced Precision@k values above 92% for k between 5 and 30, with Recall@k reaching 1.0 as k increased. These results indicate that the system not only performs reliably in classifying complex academic texts but also effectively recommends contextually relevant documents. This integrated approach shows strong potential for enhancing academic document retrieval and supports the development of semantically aware information management systems.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112809">
                <text>Muhammad Deagama Surya Antariksa, Aris Sugiharto, Bayu Surarso</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112810">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6789/1106</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112811">
                <text>Master of Information Systems, Postgraduate School, Universitas Diponegoro, Semarang, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112812">
                <text>August 13, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112813">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112814">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112815">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112816">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10550" public="1" featured="1">
    <fileContainer>
      <file fileId="10563">
        <src>https://repository.horizon.ac.id/files/original/00d880ceba21f33035e9e707d2306f98.pdf</src>
        <authentication>0c30b901a3b06fa316a384d5b8839409</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112784">
                <text>Comparative Performance of ResNet Architectures for Toraja Carving Image Classification with Data Augmentation</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112785">
                <text>ResNet; classification; Toraja carving; data augmentation; CNN</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112786">
                <text>The complexity of the motifs and large number of different patterns make the classification of Toraja carvings challenging. The objective of this study is to develop a Convolutional Neural Network automatic classification model using a comparative analysis of the performance of three ResNet architectures. Data augmentation techniques were used to enrich the diversity of the training samples and improve the robustness of the model. The experimental results showed that ResNet101V2 had the highest validation accuracy, which was greater than 97%, followed by ResNet50V2 with more than 96%, and finally, ResNet152V2 with more than 94.74%. These test results indicate that the ResNet101V2 architecture has a better classification performance for complex motifs, with a good balance between precision and recall. However, the confusion matrix and per-class performance metrics indicated that motifs with high similarity, such as Paqdon-Bolu and Paqtedong, remained challenging. This study demonstrated that deeper CNN architectures and data augmentation techniques are effective in improving the classification accuracy of complex carving patterns. Further research should explore hybrid or advanced augmentation methods to improve the overall robustness and accuracy of the model.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112787">
                <text>Herman, Muhammad Akbar, Haidawati Nasir, Herdianti, Huzain Azis, Lilis Nur Hayati</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112788">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/issue/view/65</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112789">
                <text>Information Systems, Faculty of Computer Science, Universitas Muslim Indonesia, Makassar, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112790">
                <text>August 9, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112791">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112792">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112793">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112794">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10539" public="1" featured="1">
    <fileContainer>
      <file fileId="10552">
        <src>https://repository.horizon.ac.id/files/original/c2e378a65c2feae559cb6abc819489a8.pdf</src>
        <authentication>df1b3c6126548b31b8b63ee19e81c315</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112663">
                <text>Comparing Optimization Algorithms in ANN Models for House Price Prediction in Pekanbaru</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112664">
                <text>AdaDelta; stochastic gradient descent (SGD); adaptive moment estimation (Adam); adaptive sharpness-aware minimization (ASAM); artificial neural network (ANN); house price prediction; optimization; Nadam</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112665">
                <text>This study evaluates the performance of five optimization algorithms in Artificial Neural Network (ANN) models for predicting house prices in Pekanbaru. The optimizers tested include Adam, AdaDelta, Stochastic Gradient Descent (SGD), Nadam, and Adaptive Sharpness-Aware Minimization (ASAM). A total of 3,149 house sales records were collected from rumah123.com between January and December 2024. After cleaning 148 incomplete entries, 3,001 valid records remained. The dataset included seven features: price, location, number of bedrooms, number of bathrooms, land area, building area, and garage capacity, with the location encoded using one-hot encoding. The research involved a literature review, problem formulation, data acquisition, preprocessing, model development, and evaluation. Model performance was assessed using the Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE). The results show that SGD consistently achieved the best performance, particularly at a 90:10 train-test split, with the lowest MAPE (1.74%) and MSE (0.3279). Adam and Nadam also performed well, while ASAM had the highest error (MAPE 6.14%). These findings indicate that SGD was the most effective optimizer for this dataset. Future research should explore larger datasets and advanced hyperparameter tuning to improve the generalizability of this model.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112666">
                <text>Doni Winarso, Edo Arribe, Syahril, Aryanto, Muhardi, Sharulniza Musa</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112667">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6619/1109</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112668">
                <text>Department of Information Systems, Faculty of Computer Science, Universitas Muhammadiyah Riau, Riau, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112669">
                <text>August 17, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112670">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112671">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112672">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112673">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10553" public="1" featured="1">
    <fileContainer>
      <file fileId="10566">
        <src>https://repository.horizon.ac.id/files/original/8364a522492804457d88a8c2821bb19b.pdf</src>
        <authentication>83736f776156dca348cd625a6301e1e6</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112817">
                <text>Deep Learning-Based Visualization of Network Threat Patterns Using GAN-Generated Infographic</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112818">
                <text>explainable AI; frechet inception distance (FID); generative adversarial network (GAN); network security; threat visualization</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112819">
                <text>Despite the growing sophistication of cyberattacks, current network traffic analysis tools often lack intuitive visual support, limiting human analysts’ ability to interpret complex threat behaviors. To address this gap, this study proposes a novel deep learning-based visualization framework using a Deep Convolutional Generative Adversarial Network (DCGAN) to synthesize threat-specific infographics from structured numerical features in the CICIDS2017 dataset. Unlike conventional methods, such as PCA or static dashboards, which often result in abstract or non-adaptive visuals, our approach generates class-distinct grayscale images that preserve the behavioral patterns of various attacks, including denial-of-service, brute force, and port scanning. The preprocessing pipeline reshapes the selected flow-based features into 28×28 matrices to train the generative model. Evaluation using the Frechet Inception Distance (FID) yielded a score of 28.4, whereas a CNN classifier trained on the generated images achieved 91.2% accuracy, confirming visual fidelity and semantic integrity. Additionally, a panel of human experts rated the interpretability of the generated images at 4.3 out of 5.0. These findings demonstrate that generative visualization can enhance human-centered threat analysis by bridging raw data with interpretable imagery, thereby offering a scalable and explainable approach for integrating AI into real-time security workflows.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112820">
                <text>Mars Caroline Wibowo, Iwan Setyawan, Adi Setiawan, Irwan Sembiring</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112821">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6717/1107</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112822">
                <text>Department of Visual Communication Design, Faculty of Academic Studies, Universitas Sains dan Teknologi Komputer, Semarang, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112823">
                <text>August 15, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112824">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112825">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112826">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112827">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10549" public="1" featured="1">
    <fileContainer>
      <file fileId="10562">
        <src>https://repository.horizon.ac.id/files/original/fdaf9ca8337f0a4e1457c5dbe5d17eef.pdf</src>
        <authentication>2a39d0eb397b9976cb4d5fadd7b1b594</authentication>
      </file>
    </fileContainer>
    <collection collectionId="791">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112640">
                  <text>Vol 9 No 4 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112773">
                <text>DiG-MFV: Dual-integrated Graph for Multilingual Fact Verification</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112774">
                <text>fact verification; graph fusion; LaBSE; multilingual model; mBERT; political claim; XLM-R</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112775">
                <text>The proliferation of misinformation in political domains, especially across multilingual platforms, presents a major challenge to maintaining public information integrity. Existing models often fail to effectively verify claims when the evidence spans multiple languages and lacks a structured format. To address this issue, this study proposes a novel architecture called Dual-integrated Graph for Multilingual Fact Verification (DiG-MFV), which combines semantic representations from multilingual language models (i.e., mBERT, XLM-R, and LaBSE) with two graph-based components: an evidence graph and a semantic fusion graph. These components are processed through a dual-path architecture that integrates the outputs from a text encoder and a graph encoder, enabling deeper semantic alignment and cross-evidence reasoning. The PolitiFact dataset was used as the source of claims and evidence. The model was evaluated by using a data split of 70% for training, 20% for validation, and 10% for testing. The training process employed the AdamW optimizer, cross-entropy loss, and regularization techniques, including dropout and early stopping based on the F1-score. The evaluation results show that DiG-MFV with LaBSE achieved an accuracy of 85.80% and an F1-score of 85.70%, outperforming the mBERT and XLM-R variants, and proved to be more effective than the DGMFP baseline model (76.1% accuracy). The model also demonstrated stable convergence during training, indicating its robustness in cross-lingual political fact verification tasks. These findings encourage further exploration in graph-based multilingual fact verification systems.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112776">
                <text>Nova Agustina, Kusrini, Ema Utami, Tonny Hidayat</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112777">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6695/1104</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112778">
                <text>Department of Informatics Doctorate, Universitas Amikom Yogyakarta, Yogyakarta, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112779">
                <text>July 27, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112780">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112781">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112782">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112783">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
