<?xml version="1.0" encoding="UTF-8"?>
<itemContainer xmlns="http://omeka.org/schemas/omeka-xml/v5" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://repository.horizon.ac.id/items/browse?collection=689&amp;output=omeka-xml&amp;sort_field=Dublin+Core%2CTitle" accessDate="2026-04-11T09:01:32+00:00">
  <miscellaneousContainer>
    <pagination>
      <pageNumber>1</pageNumber>
      <perPage>10</perPage>
      <totalResults>5</totalResults>
    </pagination>
  </miscellaneousContainer>
  <item itemId="9174" public="1" featured="1">
    <fileContainer>
      <file fileId="9198">
        <src>https://repository.horizon.ac.id/files/original/009d16710ba3633fbd0b9603ea18d7b8.pdf</src>
        <authentication>88161295abc60a8b5a75e8383958765f</authentication>
      </file>
    </fileContainer>
    <collection collectionId="689">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="98451">
                  <text>VOL. 13 NO. 4 (2024) DECEMBER 2024</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98512">
                <text>Deep Learning Model for Crop Diseases and Pest Classification</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98513">
                <text>Deep learning, convolution neural network, agricultural technology, machine learning, image recognition.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98514">
                <text>The study on deep learning models for crop diseases and pest classification looked at how these models may enhance agricultural practices, specifically for the purpose of more precise pest and crop disease classification. The research brought attention to the fact that agricultural diseases and pests pose a threat to global food security and that farmers need innovative solutions, like deep learning models, to combat these issues. The accuracy of the classification was tested using DenseNet and other deep learning models trained on secondary datasets sourced from the Kaggle website. The study compared DenseNet against several other models using a comprehensive evaluation technique. These models were AlexNet, EfficientNet, Visual Geometry Group, and Convolution Neural Network. In comparison to the other models, DenseNet achieved an outstanding accuracy score of 96.988% on the maize disease dataset and 96.9382% on the pests dataset. DenseNet's enhanced performance, brought about by its ability to efficiently capture complex features and patterns within the visual input, resulted in more precise predictions. The study discussed the consequences of DenseNet's high accuracy, suggesting that its complex architecture made it ideal for pest and crop disease classification in agriculture. The researchers also examined the possibility of integrating DenseNet into real-world agricultural systems, where its robust performance might significantly improve crop monitoring and disease management technologies. The research concluded with a list of potential areas for further research, including exploring the applicability of DenseNet to other crop types and investigating the possibility of hybrid models or transfer learning to enhance its performance.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98515">
                <text>Vincent Mbandu Ochango, Geoffrey Mariga Wambugu, Aaron Mogeni Oirere</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="98516">
                <text>www.ijcit.com</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98517">
                <text>December 2024</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98518">
                <text>peri irawan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98519">
                <text>pdf</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98520">
                <text>english</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98521">
                <text>text</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="9176" public="1" featured="1">
    <fileContainer>
      <file fileId="9200">
        <src>https://repository.horizon.ac.id/files/original/865c6c772d9e3b02bceee7f74611e22f.pdf</src>
        <authentication>1756b2db90abd784421768cd83b864c3</authentication>
      </file>
    </fileContainer>
    <collection collectionId="689">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="98451">
                  <text>VOL. 13 NO. 4 (2024) DECEMBER 2024</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98532">
                <text>Embedded Feature Selection Augmented Thyroid Disorder Prediction using MLP</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98533">
                <text>Thyroid Disorder, Feature Selection, Classification, Deep Neural Network.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98534">
                <text>Due to their considerable fatality rate and increasing frequency, thyroid disorders pose a severe hazard to people's health in the modern era. Thus, predicting thyroid disease early on, using a few basic physical indications gathered from routine physical examinations, has become a useful research topic. Being aware of these thyroid-related signs is crucial from a clinical standpoint in order to forecast outcomes and offer a solid foundation for further diagnosis. However, manual analysis and prediction are difficult and tiring due to the vast volume of data. Our goal is to use a variety of bodily signs to swiftly and reliably predict thyroid disorders. This research presents a novel prediction model for thyroid disorders. We provide an algorithm for predicting thyroid disorders based on a deep neural network and an embedded feature selection method. Based on the LinearSVC algorithm, this embedded feature selection method selects a subset of characteristics that are strongly linked</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98535">
                <text>Mir Saleem, Shabir Najar, Malik Akhtar Rasool</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="98536">
                <text>www.ijcit.com</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98537">
                <text>December 2024</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98538">
                <text>peri irawan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98539">
                <text>pdf</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98540">
                <text>english</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98541">
                <text>text</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="9179" public="1" featured="1">
    <fileContainer>
      <file fileId="9203">
        <src>https://repository.horizon.ac.id/files/original/224fea49a91dab85bbedcf730ae81404.pdf</src>
        <authentication>f25dd25329af3400ce7773d808b85c13</authentication>
      </file>
    </fileContainer>
    <collection collectionId="689">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="98451">
                  <text>VOL. 13 NO. 4 (2024) DECEMBER 2024</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98562">
                <text>Enhancing Image Processing Capabilities based on Optimized Neural Networks.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98563">
                <text>Deep Learning, CNN Optimization, Batch Normalization, Dropout, Regularization Techniques, Implementation Code</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98564">
                <text>Image processing, the ability of machines to interpret and understand visual data, has been significantly advanced by Convolutional Neural Networks (CNNs). This study investigates the enhancement of image processing performance through the optimization of CNN architectures. By comparing basic CNN models with optimized versions incorporating advanced techniques such as deeper convolutional layers, batch normalization, dropout, and data augmentation, the study aims to improve accuracy and robustness in image detection and classification tasks. The experiments are carried out on benchmark datasets, and the results demonstrate that optimized CNNs substantially outperform their basic counterparts, achieving higher training and validation accuracies. These findings highlight the critical role of architectural refinements and regularization techniques in advancing visual intelligence capabilities. This research presents a novel approach that underscores the capability of optimized CNNs to drive future innovations in the area of visual intelligence, offering more accurate and reliable visual data interpretation for real-life applications.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98565">
                <text>Kavita Mittal</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="98566">
                <text>www.ijcit.com</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98567">
                <text>December 2024</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98568">
                <text>peri irawan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98569">
                <text>pdf</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98570">
                <text>english</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98571">
                <text>text</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="9169" public="1" featured="1">
    <fileContainer>
      <file fileId="9193">
        <src>https://repository.horizon.ac.id/files/original/34174c60b32078059da2b25e2960297d.pdf</src>
        <authentication>7d3963ae332dd32325219c4dee2d4f89</authentication>
      </file>
    </fileContainer>
    <collection collectionId="689">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="98451">
                  <text>VOL. 13 NO. 4 (2024) DECEMBER 2024</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98462">
                <text>Ensemble Feature Selection for Network Intrusion Detection: Combining Information Gain and Random Forest with Recursive Feature Elimination</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98463">
                <text>Classification; ensemble; feature selection; network intrusion detection system; pre-processing; recursive feature elimination.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98464">
                <text>Network intrusion detection systems (NIDS) are essential for protecting computer networks against cyberattacks. The selection of a nominal set of essential features that may adequately discriminate malicious traffic from normal traffic is indispensable when developing a NIDS. As such, a more reliable and accurate detection result may be realized when intrusion detection is carried out on a dataset based on an inclusive feature representation. This work presents the pre-processing and feature selection workflow as well as its results in the case of the CIC-IDS-2017 dataset, with a focus on two cyber-attacks, namely Denial-of-Service (DoS) and PortScan. The study applied an ensemble feature selection method based on information gain and Random Forest to filter out important features. The Recursive Feature Elimination method was then applied to the reduced features to optimize the selected feature subset. The selected feature subset was experimented with using two classification algorithms, namely support vector machine and multi-layer perceptron. In the evaluation process, four widely used performance metrics were considered. The study results demonstrated the efficacy of the proposed ensemble approach to optimize the selected feature subset for detecting PortScan and DoS attacks in network traffic. Experimental results revealed that the support vector machine had a slight advantage in accuracy and could train more quickly. According to the study's evaluation, the NIDS may be able to shorten processing times without sacrificing the ability to detect PortScan and DoS attacks accurately by choosing a narrow subset of informative features. This suggests the approach might be applicable to real-world NIDS scenarios involving these attacks. The study also provides encouraging perspectives on how ensemble feature selection utilizing MLP and SVM can enhance the effectiveness of NIDS. Building on these findings, further research can create NIDS solutions that are even more reliable and efficient for the dynamic field of cybersecurity.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98465">
                <text>Stephen Kahara Wanjau, Gabriel Ndung’u Kamau</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="98466">
                <text>www.ijcit.com</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98467">
                <text>December 2024</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98468">
                <text>peri irawan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98469">
                <text>pdf</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98470">
                <text>english</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98471">
                <text>text</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
  <item itemId="9171" public="1" featured="1">
    <fileContainer>
      <file fileId="9195">
        <src>https://repository.horizon.ac.id/files/original/bbbac967120ea0af146b57ee0bbcf5a8.pdf</src>
        <authentication>2b74b987e83f4eec2efc849929fa0854</authentication>
      </file>
    </fileContainer>
    <collection collectionId="689">
      <elementSetContainer>
        <elementSet elementSetId="1">
          <name>Dublin Core</name>
          <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
          <elementContainer>
            <element elementId="50">
              <name>Title</name>
              <description>A name given to the resource</description>
              <elementTextContainer>
                <elementText elementTextId="98451">
                  <text>VOL. 13 NO. 4 (2024) DECEMBER 2024</text>
                </elementText>
              </elementTextContainer>
            </element>
          </elementContainer>
        </elementSet>
      </elementSetContainer>
    </collection>
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information see, http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98482">
                <text>Multi Moving Objects Detection in Video Using Pre-trained Deep Convolutional Neural Networks</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="49">
            <name>Subject</name>
            <description>The topic of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98483">
                <text>Object Tracking; Video processing; Deep neural network; K-means clustering.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="41">
            <name>Description</name>
            <description>An account of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98484">
                <text>Nowadays, object tracking is a critical concern in the field of machine vision. With the advent of powerful computers, affordable cameras, and growing demand for automatic video analysis, researchers have shown significant interest in object tracking. Various methods have been proposed for tracking objects in machine vision, but a key challenge remains: ensuring the robustness of tracking algorithms across consecutive video frames. In recent years, deep neural networks have emerged as a promising approach for accurate position estimation. In this study, we propose an enhanced method that combines deep convolutional neural networks with established techniques like K-means clustering. Our approach addresses challenges such as object disappearances and severe displacements. The selection of deep neural networks is motivated by their compatibility with target identification in video sequences, and achieving a remarkably low error rate in tracking validates our claim.</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="39">
            <name>Creator</name>
            <description>An entity primarily responsible for making the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98485">
                <text>Abolfazl Ansaripour, Hosein Mahvash Mohamadi</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="48">
            <name>Source</name>
            <description>A related resource from which the described resource is derived</description>
            <elementTextContainer>
              <elementText elementTextId="98486">
                <text>www.ijcit.com</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="40">
            <name>Date</name>
            <description>A point or period of time associated with an event in the lifecycle of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98487">
                <text>December 2024</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="37">
            <name>Contributor</name>
            <description>An entity responsible for making contributions to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98488">
                <text>peri irawan</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="42">
            <name>Format</name>
            <description>The file format, physical medium, or dimensions of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98489">
                <text>pdf</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="44">
            <name>Language</name>
            <description>A language of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98490">
                <text>english</text>
              </elementText>
            </elementTextContainer>
          </element>
          <element elementId="51">
            <name>Type</name>
            <description>The nature or genre of the resource</description>
            <elementTextContainer>
              <elementText elementTextId="98491">
                <text>text</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </item>
</itemContainer>
