<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://repository.horizon.ac.id/items/browse?collection=792&amp;output=omeka-xml&amp;page=4" accessDate="2026-04-11T03:49:31+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>4</pageNumber>
      <perPage>10</perPage>
      <totalResults>33</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="10570" public="1" featured="1">
    <fileContainer>
      <file fileId="10606">
        <src>https://repository.horizon.ac.id/files/original/961a91d4d3ccef3e2068842b247f20c3.pdf</src>
        <authentication>d2f677b2f84493cbc285ad9bef85a238</authentication>
      </file>
    </fileContainer>
    <collection collectionId="792">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112916">
                  <text>Vol 9 No 5 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="113005">
                <text>Stacking Ensemble Learning Model for Intrusion Detection in Electrical Substation</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="113006">
                <text>electrical substations; intrusion detection system; machine learning; stacking ensemble learning</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="113007">
                <text>Electrical substations are crucial infrastructure in power transmission and distribution but are increasingly vulnerable to cyber threats. However, existing intrusion detection systems (IDS) face several limitations, such as high false positive rates, weakness in anticipating new attack patterns, and imbalances in detecting different types of intrusions. This study proposes a Stacking Ensemble Learning model to enhance intrusion detection accuracy in electrical substations. The proposed model integrates Logistic Regression (LR), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and XGBoost (XGB) as base models, with XGB acting as the meta-model. A real-world electrical substation IEC 60870-5-104 network traffic dataset comprising 319,949 instances with multiple attacks, such as DoS, Port Scan, NTP DDoS, IEC 104 Starvation, Fuzzy Attack, Flood Attack, and MITM, was used in this study. The results demonstrate that the stacking model achieves the best performance, with accuracy (0.99990), precision (0.99990), recall (0.99990), and F1 score (0.99990), surpassing the base models, Bagging, and Boosting. T-test results further confirmed statistical significance, with p-values of 0.00428 (LR), 0.04237 (SVM), 0.00000 (XGB), 0.00057 (KNN), 0.00549 (Boosting), and 0.00000 (Bagging), reinforcing the superiority of the proposed approach. These findings highlight the effectiveness of Stacking Ensemble Learning in enhancing the detection performance of IDS for electrical substations, outperforming traditional models and other ensemble learning methods.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="113008">
                <text>Mohammad Mahruf Alam, Feddy Setio Pribadi, Rizky Ajie Aprilianto, Arvina Rizqi Nurul'aini</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="113009">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6502/1139</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="113010">
                <text>Department of Electrical Engineering, Faculty of Engineering, Universitas Negeri Semarang, Semarang, Indonesia</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="113011">
                <text>October 11, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="113012">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="113013">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="113014">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="113015">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10569" public="1" featured="1">
    <fileContainer>
      <file fileId="10605">
        <src>https://repository.horizon.ac.id/files/original/9da6740c76ded7220b2837976ea8a603.pdf</src>
        <authentication>1f7b4842f4f6c4b3afb8bf897934c73e</authentication>
      </file>
    </fileContainer>
    <collection collectionId="792">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112916">
                  <text>Vol 9 No 5 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112994">
                <text>Breast Cancer Histopathological Image Classification with Convolutional Neural Networks Models</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112995">
                <text>breast cancer; histopathological image classification; deep learning; convolutional neural network; support vector machine</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112996">
                <text>Early diagnosis and treatment can reduce mortality rates by preventing the progression of breast cancer. Owing to convolutional neural networks (CNN), breast cancer diagnosis can be performed faster and more objectively than by humans, using thousands of histopathological images. This study aimed to evaluate and compare the rapid and effective diagnostic performance of CNN models on breast tumor images, utilizing transfer learning through pre-training and fine-tuning on novel datasets. The study was performed in two ways on the BreakHis and BACH datasets. First, fine-tuned VGG16, VGG19, Xception, InceptionV3, ResNet50, and InceptionResNetV2 models were used for classification. Second, these CNN models were used as feature extractors and support vector machines (SVMs) as classifiers. The success of all models in tumor classification was interpreted using performance metrics such as accuracy, precision, recall, F1 score, and AUC. The best-performing models were as follows: the InceptionResNetV2+SVM model, with an accuracy of 99.3%, precision of 99.0%, recall of 100.0%, F1 score of 99.5%, and AUC of 98.9% for the BreakHis dataset; and the InceptionResNetV2 model, with an accuracy of 96.7%, precision of 93.8%, recall of 100.0%, F1 score of 96.8%, and AUC of 96.7% for the BACH dataset. In conclusion, the CNN methods show good generalization abilities and can respond to clinical needs.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112997">
                <text>Isil Unaldi, Leman Tomak</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112998">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6420/1130</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112999">
                <text>Department of Biostatistics and Medical Informatics, Faculty of Medicine, Ondokuz Mayis University, Samsun, Türkiye</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="113000">
                <text>September 29, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="113001">
                <text>Isil Unaldi, Leman Tomak</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="113002">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="113003">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="113004">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="10563" public="1" featured="1">
    <fileContainer>
      <file fileId="10576">
        <src>https://repository.horizon.ac.id/files/original/21e42433a211ee46ec348000a9675eb0.pdf</src>
        <authentication>1f7b4842f4f6c4b3afb8bf897934c73e</authentication>
      </file>
    </fileContainer>
    <collection collectionId="792">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="112916">
                  <text>Vol 9 No 5 (2025)</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <itemType itemTypeId="1">
      <name>Text</name>
      <description>A resource consisting primarily of words for reading. Examples include books, letters, dissertations, poems, newspapers, articles, archives of mailing lists. Note that facsimiles or images of texts are still of the genre Text.</description>
    </itemType>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112928">
                <text>Breast Cancer Histopathological Image Classification with Convolutional Neural Networks Models</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112929">
                <text>breast cancer; histopathological image classification; deep learning; convolutional neural network; support vector machine</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112930">
                <text>Early diagnosis and treatment can reduce mortality rates by preventing the progression of breast cancer. Owing to convolutional neural networks (CNN), breast cancer diagnosis can be performed faster and more objectively than by humans, using thousands of histopathological images. This study aimed to evaluate and compare the rapid and effective diagnostic performance of CNN models on breast tumor images, utilizing transfer learning through pre-training and fine-tuning on novel datasets. The study was performed in two ways on the BreakHis and BACH datasets. First, fine-tuned VGG16, VGG19, Xception, InceptionV3, ResNet50, and InceptionResNetV2 models were used for classification. Second, these CNN models were used as feature extractors and support vector machines (SVMs) as classifiers. The success of all models in tumor classification was interpreted using performance metrics such as accuracy, precision, recall, F1 score, and AUC. The best-performing models were as follows: the InceptionResNetV2+SVM model, with an accuracy of 99.3%, precision of 99.0%, recall of 100.0%, F1 score of 99.5%, and AUC of 98.9% for the BreakHis dataset; and the InceptionResNetV2 model, with an accuracy of 96.7%, precision of 93.8%, recall of 100.0%, F1 score of 96.8%, and AUC of 96.7% for the BACH dataset. In conclusion, the CNN methods show good generalization abilities and can respond to clinical needs.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112931">
                <text>Isil Unaldi, Leman Tomak</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="112932">
                <text>https://jurnal.iaii.or.id/index.php/RESTI/article/view/6420/1130</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="45">
            <name>Publisher</name>
            <description>An entity responsible for making the resource available</description>
            <elementTextContainer>
              <elementText elementTextId="112933">
                <text>Department of Biostatistics and Medical Informatics, Faculty of Medicine, Ondokuz Mayis University, Samsun, Türkiye</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112934">
                <text>September 29, 2025</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112935">
                <text>FAJAR BAGUS W</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112936">
                <text>PDF</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112937">
                <text>ENGLISH</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="112938">
                <text>TEXT</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
