<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://repository.horizon.ac.id/items/browse?collection=788&amp;output=omeka-xml&amp;sort_field=Dublin+Core%2CTitle" accessDate="2026-04-14T19:41:40+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>1</pageNumber>
      <perPage>10</perPage>
      <totalResults>28</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="10527" public="1" featured="1">
    <fileContainer>
      <file fileId="10540">
        <src>https://repository.horizon.ac.id/files/original/2a0c36c807df92a3ace4ace9b9b497a3.pdf</src>
        <authentication>54877d49de2f23442359b73d6703ec35</authentication>
      </file>
    </fileContainer>
    <collection collectionId="788">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112329">
                  <text>Vol 9 No 3 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112529">
                <text>A Multi-Objective Particle Swarm Optimization Approach for Optimizing K-Means Clustering Centroids</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112530">
                <text>centroid; k-means; multiobjective particle swarm optimization; the sum of square within; the sum of square between</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112531">
                <text>The K-Means algorithm is a popular unsupervised learning method used for data clustering. However, its performance heavily depends on centroid initialization and the distribution shape of the data, making it less effective for datasets with complex or non-linear cluster structures. This study evaluates the performance of the standard K-Means algorithm and proposes a Multiobjective Particle Swarm Optimization K-Means (MOPSO+K-Means) approach to improve clustering accuracy. The evaluation was conducted on five benchmark datasets: Atom, Chainlink, EngyTime, Target, and TwoDiamonds. Experimental results show that K-Means is effective only on datasets with clearly separated clusters, such as EngyTime and TwoDiamonds, achieving accuracies of 95.6% and 100%, respectively. In contrast, MOPSO+K-Means achieved a substantial accuracy improvement on the complex Target dataset, increasing from 0.26% to 59.2%. The TwoDiamonds dataset achieved the most desirable trade-off: it had the lowest SSW (1323.32), relatively high SSB (2863.34), and the lowest standard deviation values, indicating compact clusters, good separation, and high consistency across runs. These findings highlight the potential of swarm-based optimization to achieve consistent and accurate clustering results on datasets with varying structural complexity.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112532">
                <text>Aina Latifa Riyana Putri, Joko Riyono, Christina Eni Pujiastuti, Supriyadi</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112533">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6533/1086</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112534">
                <text>Data Science, Telkom University, Purwokerto, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112535">
                <text>June 21, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112536">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112537">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112538">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112539">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10529" public="1" featured="1">
    <fileContainer>
      <file fileId="10542">
        <src>https://repository.horizon.ac.id/files/original/ce2d04e1e1473a399d2feb001bc5ffe9.pdf</src>
        <authentication>cfe84a045b49efc0a65db90cc5f4e4e6</authentication>
      </file>
    </fileContainer>
    <collection collectionId="788">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112329">
                  <text>Vol 9 No 3 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112551">
                <text>A New Framework for Dynamic Educational Marketing Segmentation in Student Recruitment: Optimizing Fuzzy C-Means with Metaheuristic Techniques</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112552">
                <text>dynamic educational marketing; fuzzy C-Means; metaheuristic optimization; RFM; student recruitment</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112553">
                <text>An effective educational marketing strategy requires accurate school segmentation to enhance new student recruitment. Traditional segmentation methods such as K-means are often used, but they have limitations in capturing the flexibility of school characteristics. Fuzzy C-Means (FCM) offers a more adaptive approach by allowing each school to simultaneously have a degree of membership in several clusters. However, the performance of FCM highly depends on determining parameters such as the number of clusters (k) and the level of fuzziness (m), which are not always optimal when determined manually. This study develops a new framework for dynamic educational marketing segmentation in student recruitment by optimizing FCM using three metaheuristic techniques: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE). Performance was evaluated using the Fuzzy Silhouette Index (FSI). The experimental results showed that DE yielded the best results with the highest FSI value (0.8023), producing eight main clusters based on the Recency, Frequency, and Monetary (RFM) model. Based on the clustering results, a personalized and adaptive marketing strategy was designed to enhance the effectiveness of student recruitment. The proposed framework enhances segmentation accuracy and supports the implementation of dynamic data-driven marketing in the context of higher education. This study also opens new directions for educational data mining research and machine-learning-based marketing strategies.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112554">
                <text>Rizal Bakri, Bobur Sobirov, Niken Probondani Astuti, Ansari Saleh Ahmar, Pawan Kumar Singh</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112555">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6515/1090</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112556">
                <text>Department of Digital Business, Faculty of Economics and Business, Makassar State University, Makassar, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112557">
                <text>June 22, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112558">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112559">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112560">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112561">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10536" public="1" featured="1">
    <fileContainer>
      <file fileId="10549">
        <src>https://repository.horizon.ac.id/files/original/18bbbea9619077aea7cb62c343156bb6.pdf</src>
        <authentication>2c192a304f6d2730e8dac7d52279bf54</authentication>
      </file>
    </fileContainer>
    <collection collectionId="788">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112329">
                  <text>Vol 9 No 3 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112628">
                <text>Automated Indonesian Plate Recognition: YOLOv8 Detection and TensorFlow-CNN Character Classification</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112629">
                <text>YOLO; TensorFlow; optical character recognition (OCR); indonesian license plate detection; deep learning</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112630">
                <text>The precise identification and reading of Indonesian vehicle number plates are important in many areas, including law enforcement, collection of charges, management of parking areas, and safety measures. This study integrates the YOLOv8 object detection algorithm with three OCR methods: EasyOCR, TesseractOCR, and TensorFlow. YOLOv8 can identify license plates from images and videos at high speed and with reliability under different conditions, and is therefore used in this study to perform plate detection in images and videos. After plates are detected, OCR techniques are applied to segment and read the characters. Both EasyOCR and TesseractOCR performed moderately well on static images, achieving accuracy rates of 70% and 68% respectively, but both suffered significantly lower performance in video scenarios. Of the 100 video frames, EasyOCR correctly identified characters in 61 frames and TesseractOCR in 58 frames, while the TensorFlow-based model outperformed the other two with 75 correct recognitions. Furthermore, with static images as input, the TensorFlow-based model completed recognition with 100% accuracy. This can be explained by its design, which uses a CNN with ReLU activation and Softmax outputs, trained on 10,261 annotated characters and enhanced with five data augmentation techniques. The model shows strong performance in handling dynamic conditions such as motion blur, changing lighting, and rotation of the plate angle. The results underscore the drawbacks of one-size-fits-all OCR applications in real-world use cases and stress the need for bespoke model training, as well as hierarchical contouring, in the context of automatic license plate recognition (ALPR).
This study provides additional insights into ALPR systems by delivering a robust, scalable, and real-time tool for plate and character recognition, which is essential for intelligent transportation systems.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112631">
                <text>Windu Gata, Dwiza Riana, Muhammad Haris, Maria Irmina Prasetiyowati, Dika Putri Metalica</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112632">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6310/1066</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112633">
                <text>Computer Science, Faculty of Information Technology, Universitas Nusa Mandiri, Jakarta, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112634">
                <text>June 15, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112635">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112636">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112637">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112638">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10532" public="1" featured="1">
    <fileContainer>
      <file fileId="10545">
        <src>https://repository.horizon.ac.id/files/original/165b83d1410f0c2180ed5d0892ff96ce.pdf</src>
        <authentication>7e4bd774aee8bb85a7e213ddbbb97143</authentication>
      </file>
    </fileContainer>
    <collection collectionId="788">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112329">
                  <text>Vol 9 No 3 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112584">
                <text>Automatic Classification of Multilanguage Scientific Papers to the Sustainable Development Goals Using Transfer Learning</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112585">
                <text>multilingual model; multilabel text classification; scientific papers; SDGs research</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112586">
                <text>The classification of scientific papers according to their relevance to Sustainable Development Goals (SDGs) is a critical task in identifying the research development status of goals. However, with the growing volume of scientific literature published worldwide in multiple languages, manual categorization of these papers has become increasingly complex and time-consuming. Furthermore, the need for a comprehensive multilingual dataset to train effective models complicates the task, as obtaining such datasets for various languages is resource-intensive. This study proposes a solution to this problem by leveraging transfer learning techniques to automatically classify scientific papers into SDG labels. By fine-tuning the pretrained multilingual model mBERT on SDG publication datasets in a multilabel approach, we demonstrate that transfer learning can significantly improve classification performance, even with limited labelled data, compared to SVM. Our approach enables the effective processing of scientific papers in different languages and facilitates the seamless mapping of research to the relevance of SDGs, the four pillars of SDGs, and the 17 goals of SDGs. The proposed method addresses the scalability issue in SDG classification and lays the groundwork for more efficient systems that can handle the multilingual nature of modern scientific publications.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112587">
                <text>Lya Hulliyyatus Suadaa, Anugerah Karta Monika, Berliana Sugiarti Putri, Yeni Rimawat</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112588">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6560/1093</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112589">
                <text>Politeknik Statistika STIS, Jakarta, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112590">
                <text>June 23, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112591">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112592">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112593">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112594">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10519" public="1" featured="1">
    <fileContainer>
      <file fileId="10532">
        <src>https://repository.horizon.ac.id/files/original/ba4df9933c24663c15ad56c1f7cfb1bc.pdf</src>
        <authentication>b6a98f974df9b493772ae1a86db07365</authentication>
      </file>
    </fileContainer>
    <collection collectionId="788">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112329">
                  <text>Vol 9 No 3 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112441">
                <text>Benchmarking Metaheuristic Algorithms Against Optimization Techniques for Transportation Problem in Supply Chain Management</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112442">
                <text>optimization; supply chain management; MODI; simulated annealing; particle swarm optimization</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112443">
                <text>The optimization of transportation problems plays a significant role in supply chain management (SCM), where minimizing costs and improving efficiency are mandatory. The transition from manual methods to advanced computational approaches, such as metaheuristic algorithms, enhances decision-making and consolidates operations within SCM. Malaysia's transportation system has been confronting crucial challenges, characterized by congested roadways, limited rail connectivity, and inefficient port operations, which interfere with the fluidity of goods and supply chain efficiency. This highlights the critical need for optimization techniques to enhance competitiveness and efficiency in the evolving SCM landscape. The research aims to explore the application of metaheuristic algorithms in optimizing transportation problems, with the Modified Distribution (MODI) method as the benchmark and the North West Corner Method (NWCM) employed to obtain an initial feasible solution. Metaheuristic algorithms, specifically Simulated Annealing (SA) and Particle Swarm Optimization (PSO), are implemented to explore alternative near-optimal solutions and assess their performance in terms of cost accuracy and computational efficiency. The results indicate that SA achieves a deviation of 12.92% in cost accuracy compared to the optimal MODI method, making it suitable for scenarios where precision is critical, whereas PSO, which is 296.92 seconds faster, is ideal for time-sensitive applications. Finally, this study encourages future research to explore additional algorithms, external factors, and broader applications for enhanced real-world relevance and scalability to accentuate the potential of metaheuristic algorithms.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112444">
                <text>Felicia Lim Xin Ying, Suliadi Firdaus Bin Sufahani</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112445">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6513/1064</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112446">
                <text>Department of Mathematics and Statistics, Faculty of Applied Sciences and Technology, Universiti Tun Hussein Onn Malaysia, Muar, Malaysia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112447">
                <text>June 12, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112448">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112449">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112450">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112451">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10509" public="1" featured="1">
    <fileContainer>
      <file fileId="10522">
        <src>https://repository.horizon.ac.id/files/original/8e4009fb01dd43a21641f52df15fc78e.pdf</src>
        <authentication>d030a7011af66f7c894f49b99a4449d9</authentication>
      </file>
    </fileContainer>
    <collection collectionId="788">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112329">
                  <text>Vol 9 No 3 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112330">
                <text>Classification of Red Foxes: Logistic Regression and SVM with VGG-16, VGG-19, and Inception V3</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112331">
                <text>red fox images; image classification; deep learning models</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112332">
                <text>Deep learning models demonstrate a high degree of accuracy in image classification. The task of distinguishing between various sources of red fox images—such as authentic photographs, game-captured images, hand-drawn illustrations, and AI-generated images—raises important considerations regarding realism, texture, and style. This study conducts an evaluation of three deep learning architectures: Inception V3, VGG-16, and VGG-19, utilizing images of red foxes. The research employs Silhouette Graphs, Multidimensional Scaling (MDS), and t-Distributed Stochastic Neighbor Embedding (t-SNE) to assess clustering and classification efficiency. Support Vector Machines (SVM) and Logistic Regression are utilized to compute the Area Under the Curve (AUC), Classification Accuracy (CA), and Mean Squared Error (MSE). The MDS plots and t-SNE data clearly demonstrate the capability of the three deep learning models to distinguish between the image categories. For game-captured images, VGG-16 and VGG-19 demonstrate quite outstanding performance with silhouette scores of 0.398 and 0.315, respectively. This study explores the enhancement of classification accuracy in logistic regression and support vector machines (SVM) through the refinement of decision boundaries for overlapping categories. Utilizing Inception V3, an artificial intelligence-generated image silhouette score of 0.244 was achieved, demonstrating proficiency in image classification. The research highlights the challenges posed by diverse datasets and the efficacy of deep learning models in the classification of red fox images. The findings suggest that integrating deep learning with machine learning classifiers, such as logistic regression and SVM, may improve classification accuracy.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112333">
                <text>Brian Sabayu, Imam Yuadi</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112334">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6356/1054</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112335">
                <text>Master’s Program Human Resource Development-Data Analytics, Graduate School, Universitas Airlangga, Surabaya, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112336">
                <text>May 24, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112337">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112338">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112339">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112340">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10534" public="1" featured="1">
    <fileContainer>
      <file fileId="10547">
        <src>https://repository.horizon.ac.id/files/original/40c47546e0bbeec6c93ce995f42bfbca.pdf</src>
        <authentication>54877d49de2f23442359b73d6703ec35</authentication>
      </file>
    </fileContainer>
    <collection collectionId="788">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112329">
                  <text>Vol 9 No 3 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112606">
                <text>Classification of Retinoblastoma Eye Disease on Digital Fundus Images Using Geometric Features and Machine Learning</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112607">
                <text>retinoblastoma; digital fundus images; classification; geometric features; machine learning</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112608">
                <text>Medical image analysis is essential for detecting retinoblastoma tumors due to the ability of the method to assist doctors in examining the morphology, density, and distribution of blood vessels. The classification of normal and retinoblastoma-affected retinas is a preliminary step in treating retinoblastoma tumors. Therefore, this research aimed to propose the new development of a method to classify normal and retinoblastoma-affected retinas using geometric feature extraction and machine learning. The workflow consisted of (1) fundus image data collection for retinoblastomas, (2) image segmentation, (3) feature extraction, (4) building a classification model using machine learning, (5) splitting testing and training data, (6) classification using machine learning methods, and (7) evaluation of classification results using a confusion matrix. The results showed that the segmentation method used could detect retinoblastoma areas and extract geometric features. The SVM method achieved an accuracy of 0.96, while the RF and DT had 0.55 and 0.63, respectively. Moreover, the comparison with previous research showed that the proposed method had a 4% improvement in classification performance. This led to the conclusion that classification using geometric features combined with the SVM on digital fundus images of retinoblastoma eye disease produced the best results.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112609">
                <text>Arif Setiawan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112610">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6337/1058</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112611">
                <text>Department of Information System, Faculty of Engineering, Muria Kudus University, Kudus, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112612">
                <text>May 24, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112613">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112614">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112615">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112616">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10510" public="1" featured="1">
    <fileContainer>
      <file fileId="10523">
        <src>https://repository.horizon.ac.id/files/original/1429755af966552d26d35c637c538123.pdf</src>
        <authentication>5cd10e98fe9ded1c1318e52dbf291e59</authentication>
      </file>
    </fileContainer>
    <collection collectionId="788">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112329">
                  <text>Vol 9 No 3 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112341">
                <text>Comparative Evaluation of Preprocessing Methods for MobileNetV1 and V2 in Waste Classification</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112342">
                <text>waste; MobileNetV1; MobileNetV2; preprocessing; waste classification</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112343">
                <text>Waste management remains a critical challenge for many countries, including Indonesia, which ranks as the world's second-largest contributor of waste. As tens of millions of tons are produced each year and the management system remains ineffective, environmental conditions and public health continue to deteriorate. To address this issue, it is imperative to develop more accurate and efficient solutions to enhance waste classification and management. This study investigates the influence of various image preprocessing techniques on the performance of MobileNetV1 and MobileNetV2 models in the classification of waste images. Preprocessing is crucial for enhancing data quality, particularly when dealing with real-world images that are affected by inconsistent lighting, texture, and clarity. Five preprocessing scenarios were evaluated: Baseline, CLAHE with Bilateral Filtering, CLAHE with Sharpening, Grayscale with CLAHE, and Gaussian Blur with Bilateral Filtering. Among these, the combination of CLAHE and Bilateral Filtering applied to MobileNetV1 achieved the best results, with 85% training accuracy, 96% validation accuracy, a training loss of 0.3178, and the lowest validation loss of 0.1630. Overall, MobileNetV1 benefited more significantly from preprocessing variations than MobileNetV2, particularly in terms of accuracy improvement and reduction in prediction error. These findings underscore the importance of effective preprocessing in enhancing model performance for waste image classification.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112344">
                <text>Aulia Afifah, Endah Ratna Arumi, Maimunah, Setiya Nugroho</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112345">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6211/1055</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112346">
                <text>Informatics Engineering, Engineering, Universitas Muhammadiyah Magelang, Magelang, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112347">
                <text>May 24, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112348">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112349">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112350">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112351">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10526" public="1" featured="1">
    <fileContainer>
      <file fileId="10539">
        <src>https://repository.horizon.ac.id/files/original/dfd0b0c090318af06e95d7ae99c2d9dd.pdf</src>
        <authentication>54877d49de2f23442359b73d6703ec35</authentication>
      </file>
    </fileContainer>
    <collection collectionId="788">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112329">
                  <text>Vol 9 No 3 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112518">
                <text>Development of IoT-based Automatic Water Drainage System on Fishing Boat to Improve Operational Efficiency</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112519">
                <text>automatic; inverter; IoT; system; sensor</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112520">
                <text>The fishing profession requires a reliable system to remove stagnant water from fishing boats, as manual drainage is time-consuming and inefficient. This study proposes an IoT-based automatic water drainage system without an inverter or ultrasonic sensor, offering a cost-effective alternative. The system uses a water level sensor and a DC water pump, controlled via a smartphone application. The study follows the Research and Development (R&amp;D) model through several stages: identifying potential and problems, gathering initial data needs, prototype creation, prototype validation, prototype revision, validation, and implementation. Problems arose at the prototype stage; the aspects that required revision included the wiring, power suitability, the water level sensor test, and the configuration of the relay used. White-box testing, covering hardware implementation, software implementation, application usage, and automatic drainage system testing, shows that the IoT-based automatic water drainage system functions as intended. This is indicated by the results of the liquid water level sensor functionality test, the DC water pump functionality test, the solar panel and battery functionality test, and the IoT functionality test. IoT-based automatic water drainage systems on fishing boats are more efficient and cost-effective in the long run, although diesel engines offer more reliability under adverse weather conditions or in places with limited access to sunlight.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112521">
                <text>Zulfachmi, Zulkipli, Vita Rahayu, Aggry Saputra, Muthiah As Saidah</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112522">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6222/1084</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112523">
                <text>Program Studi S1 Teknik Informatika, STT Indonesia Tanjung Pinang, Tanjungpinang, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112524">
                <text>June 20, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112525">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112526">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112527">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112528">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10513" public="1" featured="1">
    <fileContainer>
      <file fileId="10526">
        <src>https://repository.horizon.ac.id/files/original/baea07f27d41bd371d8e1f5022ec5390.pdf</src>
        <authentication>eb9020a4d5b5534a77d9e0fda98978ac</authentication>
      </file>
    </fileContainer>
    <collection collectionId="788">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112329">
                  <text>Vol 9 No 3 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112374">
                <text>Development of a Document-Based Gait System With Interactive Visualisation for Clinical Analysis</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112375">
                <text>biomechanics; dashboard; database; gait analysis; mongoDB</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112376">
                <text>Gait analysis is a crucial aspect of biomechanics and medical rehabilitation, used to detect movement disorders, assess therapy effectiveness, and understand human walking patterns. In Indonesia, gait research remains limited, with most data sourced from abroad, which may not reflect the characteristics of the local population. This study uses data from Vicon camera recordings that track marker movements on the subject's body and convert them into kinematic data in spatial coordinates, stored in Excel files. To support clinical applications, an efficient system is needed to manage gait data and present analysis results interactively. Therefore, a MongoDB-based gait data management system was developed owing to its flexibility in handling unstructured data and its scalability. The system was designed to preprocess gait data and display the results through an interactive Streamlit dashboard. The analysis involved calculating gait angle parameters, visualized in a gait cycle angle graph and analyzed statistically using the mean and standard error to improve interpretation accuracy. Testing shows that the system can store data in an average of 1.52 seconds, retrieve it in 3.598 seconds, and render visualizations in 0.192 seconds, with high accuracy and only a 0.1-degree error between the input and output. This system effectively addresses the challenge of managing local gait data and supports comprehensive biomechanical analysis, enabling clinicians to make informed decisions regarding rehabilitation needs based on deviations from normal gait angle ranges.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112377">
                <text>Rizal Rahman Rizkika, Helisyah Nur Fadhilah, Tanzilal Mustaqim, Rifdatun Ni’mah</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112378">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6451/1074</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112379">
                <text>Department of Data Science, Surabaya Directorate, Telkom University, Surabaya, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112380">
                <text>June 16, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112381">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112382">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112383">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112384">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
