<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN"
  "https://jats.nlm.nih.gov/publishing/1.3/JATS-journalpublishing1-3.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="1.3" article-type="research-article">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">IJAR</journal-id>
      <journal-title-group>
        <journal-title>Indonesian Journal of Advanced Research</journal-title>
      </journal-title-group>
      <issn pub-type="epub">2986-0768</issn>
      <publisher>
        <publisher-name>Formosa Publisher</publisher-name>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.55927/ijar.v4i5.14340</article-id>
      <title-group>
        <article-title>Analysis of the Hankel Matrix in Embedding Using the Singular Spectrum Analysis (SSA) Method</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <name>
            <surname>Ni’am</surname>
            <given-names>Dafi’ Ichsani Aysar</given-names>
          </name>
          <aff>Program Studi Matematika, Universitas Sebelas Maret</aff>
        </contrib>
        <contrib contrib-type="author">
          <name>
            <surname>Saputro</surname>
            <given-names>Dewi Retno Sari</given-names>
          </name>
          <aff>Program Studi Matematika, Universitas Sebelas Maret</aff>
          <email>dewiretnoss@staff.uns.ac.id</email>
          <corresp>Corresponding Author</corresp>
        </contrib>
        <contrib contrib-type="author">
          <name>
            <surname>Sutanto</surname>
            <given-names></given-names>
          </name>
          <aff>Program Studi Matematika, Universitas Sebelas Maret</aff>
        </contrib>
      </contrib-group>
      <pub-date pub-type="epub">
        <day>23</day>
        <month>05</month>
        <year>2025</year>
      </pub-date>
      <history>
        <date date-type="received">
          <day>07</day>
          <month>04</month>
          <year>2025</year>
        </date>
        <date date-type="rev-recd">
          <day>21</day>
          <month>04</month>
          <year>2025</year>
        </date>
        <date date-type="accepted">
          <day>23</day>
          <month>05</month>
          <year>2025</year>
        </date>
      </history>
      <volume>4</volume>
      <issue>5</issue>
      <fpage>459</fpage>
      <lpage>470</lpage>
      <abstract>
        <p>Singular Spectrum Analysis (SSA) is an effective time series decomposition method for separating key components in data, such as trends, seasonality, and noise. This study aims to analyze the role of the Hankel matrix in the SSA embedding process and how the selection of the window length (L) affects the effectiveness of component separation in time series data. The data used include public data that can be influenced by seasonal factors and unexpected events, such as natural disasters or regulatory changes. The research process begins with a data preprocessing stage, followed by an embedding stage that forms the matrix used in decomposition with Singular Value Decomposition (SVD). To evaluate the similarity of the separated components, the w-correlation is used. The results show that selecting an optimal window length, in the range N/4 &lt; L &lt; N/2, is essential to maintain a balance between temporal information and matrix dimensions. By understanding the structure of the Hankel matrix and selecting the right parameters, the embedding process in SSA can more effectively separate the trend, seasonal, and noise components of a time series while preserving its temporal information.</p>
      </abstract>
      <kwd-group>
        <kwd>Singular Spectrum Analysis</kwd>
        <kwd>Hankel Matrix</kwd>
        <kwd>Embedding</kwd>
        <kwd>SVD Decomposition</kwd>
        <kwd>Separability</kwd>
      </kwd-group>
      <permissions>
        <license>
          <ali:license_ref xmlns:ali="http://www.niso.org/schemas/ali/1.0/">http://creativecommons.org/licenses/by/4.0/</ali:license_ref>
          <license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License.</license-p>
        </license>
      </permissions>
    </article-meta>
  </front>
  <body>
 <sec>
  <title>INTRODUCTION</title>
  <disp-quote>
    <p>Time series analysis is a statistical method used to analyze data
    collected sequentially over time in order to find meaningful patterns
    or characteristics. Box et al. (1973) explain that among the various
    time series analysis techniques, Singular Spectrum Analysis (SSA)
    has evolved into an accurate method due to its ability to decompose
    a time series into interpretable components such as trend,
    seasonality, and noise. In time series analysis with SSA, the
    embedding process plays a central role in the formation of the
    Hankel matrix, which subsequently becomes the main basis for
    spectral decomposition. The distinctive structure of the Hankel
    matrix allows a time series to be represented in a matrix format
    that is richer in spectral information. Therefore, understanding the
    structure and characteristics of the Hankel matrix in the context of
    SSA embedding is key to improving the effectiveness of the method.
    This understanding makes it possible to analyze the basic
    characteristics of the Hankel matrix in SSA embedding, discuss the
    influence of its anti-diagonal properties on the decomposition
    process, and evaluate the importance of parameter selection in
    ensuring component separability. In its implementation, SSA relies
    heavily on an embedding process that uses the Hankel matrix as its
    mathematical basis. The structure of the Hankel matrix plays a
    fundamental role in transforming a univariate time series into a
    trajectory matrix at the embedding stage of SSA (Zotov and Shlemov,
    2021). With its constant anti-diagonals, the Hankel matrix has
    mathematical significance in time series transformations: the
    anti-diagonal structure helps maintain the time order in the data,
    making the decomposition of components more accurate.</p>
    <p>The Hankel matrix is a matrix whose elements have the same value
    along each anti-diagonal, a property that serves to preserve
    temporal information in time series data. This structure plays a
    fundamental role in transforming a univariate time series into a
    trajectory matrix at the embedding stage of SSA. Golyandina and
    Zhigljavsky (2013) examined certain aspects of the Hankel matrix in
    SSA, particularly its algebraic properties in the context of
    component separability. Meanwhile, Hassani and Mahmoudvand (2013)
    provide a new perspective on the implications of the Hankel matrix
    structure for component reconstruction accuracy. However, there is
    still a gap in the theoretical understanding of the relationship
    between Hankel matrix characteristics and SSA decomposition quality.
    Improper selection of embedding parameters can produce a mixing
    effect between components, yet the literature does not provide a
    solid theoretical basis for the selection criteria (Chen and Zhang,
    2019). Optimization of the embedding parameters requires an in-depth
    analysis of the properties of the Hankel matrix, but a comprehensive
    mathematical framework for this is still lacking (Zhang et al.,
    2019).</p>
    <p>The key parameter in the embedding process is the window length
    (L), which determines the dimensions of the Hankel matrix. Selecting
    the optimal window length remains a challenge because it directly
    affects the method's ability to separate the components of the time
    series (Golyandina et al., 2019). This suggests that the theoretical
    understanding of the relationship between the structure of the
    Hankel matrix and the quality of the decomposition still requires
    further study (Gower et al., 2011).</p>
    <p>Based on this background, this study aims to mathematically
    analyze the properties of the Hankel matrix in the context of SSA
    embedding. In particular, the study analyzes the relationship
    between the anti-diagonal structure of the Hankel matrix and the
    separability conditions of time series components, develops
    theoretical criteria for selecting the optimal window length based
    on the characteristics of the Hankel matrix, and evaluates the
    implications of the Hankel matrix properties for the decomposition
    quality of SSA.</p>
  </disp-quote>
</sec>










<sec>
  <title>LITERATURE REVIEW</title>
  <disp-quote>
    <p>This section discusses bibliometric network visualization, the
    Hankel matrix, the window length, and the orthogonality theorem.</p>
  </disp-quote>
  <sec id="bibliometric-network-visualization-vosviewer">
    <title>Bibliometric Network Visualization (VOSviewer)</title>
    <disp-quote>
      <p>VOSviewer is a software tool that allows its users to examine
      bibliometric network maps for free (Yatscoff and Hayter, 1983).
      VOSviewer has an advantage over other analysis programs because it
      uses text mining methods to find related phrase mapping
      combinations and provides clustering methods for data analysis
      (Saputro et al., 2023).</p>
    </disp-quote>
    <graphic mimetype="image" mime-subtype="jpeg" xlink:href="vertopal_e4ee5c2c6e8541adb23ad6e8bdbe67bd/media/image3.jpeg" />
    <disp-quote>
      <p><bold>Figure 1.</bold> Visualization with Title and Abstract
      Constraints</p>
      <p>Based on Figure 1, several clusters are formed, marked with 8
      different colors. The red cluster covers research mostly on
      methodology, the purple cluster mostly on analysis, the blue
      cluster mostly on data, the orange cluster mostly on matrices, and
      the brown cluster mostly on components. Thus, only a few clusters
      examine the Hankel matrix and SSA.</p>
    </disp-quote>
    <list list-type="bullet">
      <list-item>
        <p>Hankel Matrix</p>
      </list-item>
    </list>
    <disp-quote>
      <p>The Hankel matrix is a matrix of size L × K with the special
      characteristic that every element on each of its anti-diagonals
      has the same value, a structure that plays an important role in
      the time series transformation of the SSA embedding process
      (Golyandina and Zhigljavsky, 2013). The main advantage of the
      Hankel matrix in SSA lies in its ability to maintain temporal
      relationships between elements. The concept of anti-diagonal
      invariance explains this, as it allows the elements of a time
      series to be mapped into a more orderly structure. From a
      mathematical point of view, this provides the basis for a more
      stable decomposition approach, particularly in separating the
      deterministic and stochastic components of a time series. Advanced
      matrix analysis techniques provide a new perspective on the
      separability conditions in SSA (Rodriguez and González, 2022). The
      nature of the Hankel matrix has direct implications for the
      effectiveness of the SSA algorithm, and a comprehensive
      mathematical framework for separability analysis based on the
      Hankel matrix structure focuses in particular on the orthogonality
      conditions between components (Li et al., 2022; Rodriguez and
      Martinez, 2021). The Hankel matrix H = (h<sub>ij</sub>) has the
      basic property h<sub>ij</sub> = h<sub>uv</sub> for i + j = u + v,
      which guarantees the consistency of temporal information along
      each anti-diagonal. This structure is essential for transforming a
      univariate time series into a decomposable multivariate
      representation.</p>
      <p>In the context of SSA, the Hankel matrix is formed during an
      embedding process that converts the univariate time series
      x = (x<sub>1</sub>, ..., x<sub>N</sub>) into a trajectory matrix.
      For window length L, the Hankel matrix H can be written as</p>
    </disp-quote>
    <table-wrap>
      <table>
        <colgroup>
          <col width="76%" />
          <col width="24%" />
        </colgroup>
        <thead>
          <tr>
            <th><p specific-use="wrapper">
              <disp-quote>
                <p>𝑥<sub>1</sub> 𝑥<sub>2</sub> ⋯ 𝑥<sub>𝐾</sub></p>
                <p>𝐻 = ( 𝑥<sub>2</sub> 𝑥<sub>3</sub> ⋯ 𝑥<sub>𝐾+1</sub> )</p>
                <p>⋮ ⋮ ⋱ ⋮</p>
                <p>𝑥<sub>𝐿</sub> 𝑥<sub>𝐿+1</sub> ⋯ 𝑥<sub>𝑁</sub></p>
              </disp-quote>
            </p></th>
            <th>(1)</th>
          </tr>
        </thead>
        <tbody>
        </tbody>
      </table>
    </table-wrap>
    <disp-quote>
      <p>where L is the selected window length, K = N − L + 1 is the
      number of columns, N is the length of the original time series,
      and h<sub>ij</sub> = x<sub>i+j−1</sub> for 1 ≤ i ≤ L and
      1 ≤ j ≤ K.</p>
      <p>The numerical stability of SSA is strongly influenced by the
      structure of the Hankel matrix. In numerical analysis, the
      existence of extreme singular values, whether very small or very
      large, can lead to high sensitivity in the computation of the
      decomposition, which in turn affects its results. Therefore,
      applying a regularization approach or selecting optimal parameters
      is very important to ensure stability in the use of SSA. The
      anti-diagonal structure of the Hankel matrix plays a crucial role
      in maintaining this stability: it not only affects the
      representation of the data in a higher-dimensional vector space,
      but also contributes to retaining the temporal information
      contained in the time series. In linear algebra, Hankel matrices
      are often analyzed in conjunction with spectral decomposition to
      understand how the characteristics of the singular values
      contribute to the separation of components. Further, numerical
      stability in Hankel matrix processing depends heavily on the
      selection of the right embedding parameters. A small error in
      parameter selection can produce a mixing effect that blurs the
      boundary between the deterministic and stochastic components.
      Therefore, the development of methods for determining optimal
      parameters is crucial for a more accurate and stable
      implementation of SSA.</p>
      <p>The structure of the Hankel matrix H has the special
      characteristic that each element on the k-th anti-diagonal
      satisfies i + j = k + 1. One of the main advantages of the Hankel
      matrix in SSA is its ability to maintain temporal relationships
      between elements. This can be understood through the concept of
      anti-diagonal invariance, which allows the elements of a time
      series to be mapped into a more orderly structure. Mathematically,
      this provides the basis for a more stable decomposition approach,
      especially in separating the deterministic component from the
      stochastic component of a time series.</p>
    </disp-quote>
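As a concrete illustration of the embedding in equation (1), the trajectory matrix can be built in a few lines of Python with NumPy. This is our own sketch, not code from the paper; the function name `trajectory_matrix` is an assumption for illustration.

```python
import numpy as np

def trajectory_matrix(x, L):
    """Embed a series x of length N into the L x K Hankel
    (trajectory) matrix of equation (1), with K = N - L + 1.
    With 0-based indices, H[i, j] = x[i + j], matching
    h_ij = x_{i+j-1} in the paper's 1-based notation."""
    x = np.asarray(x, dtype=float)
    K = len(x) - L + 1
    # column j holds the lagged window (x_j, ..., x_{j+L-1})
    return np.column_stack([x[j:j + L] for j in range(K)])

x = np.arange(1, 11)           # toy series, N = 10
H = trajectory_matrix(x, L=4)  # a 4 x 7 Hankel matrix
```

Because every anti-diagonal of H is constant, the original series can be read back from the first row together with the last column.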
  </sec>
  <sec id="window-length-and-compatibility">
    <title>Window Length and Compatibility</title>
    <disp-quote>
      <p>The selection of the window length (L) is a crucial aspect that
      affects the effectiveness of SSA decomposition. According to
      James (2019), the optimal window length must meet two main
      criteria: large enough to capture the patterns in the data, but
      not so large that mixing effects arise. A theoretical approach
      based on matrix properties has been proposed for selecting the
      optimal window length, taking into account dimensionality and the
      preservation of temporal information (Kim and Park, 2021), a more
      comprehensive theoretical framework for matrix-based SSA analysis
      has been developed (Moskvina and Zhigljavsky, 2021), and a
      matrix-based optimization approach has been applied to optimal
      window length selection (Park and Kim, 2021). The separability of
      the time series components depends on orthogonality conditions
      that are affected by the structure of the Hankel matrix and the
      selection of the window length.</p>
      <p>The selection of the window length affects the dimensions of
      the Hankel matrix that is formed and therefore has theoretical
      implications for its structure. The bounds N/4 &lt; L &lt; N/2 are
      based on a lower bound (N/4 &lt; L) that provides the minimal
      dimensions needed to capture temporal patterns, gives ample room
      for spectral analysis, and allows the formation of a
      representative Hankel matrix, and an upper bound (L &lt; N/2) that
      optimizes the decomposition without information redundancy,
      prevents redundancy in the Hankel matrix, and minimizes mixing
      effects between components (Sanei and Hassani, 2020). If L is too
      small, the information obtained from the embedding is limited,
      preventing an effective separation of components. Conversely, if L
      is too large, excessive information redundancy occurs, which can
      lead to inaccuracies in separating the main signal from the noise.
      Parameter selection also affects numerical stability in the
      spectral decomposition. If the selected value is too small, the
      singular values of the Hankel matrix may not reflect the patterns
      in the data well, which may cause errors in the signal
      reconstruction. On the other hand, if the selected value is too
      large, the resulting singular values can contain a considerable
      amount of noise, which risks obscuring important information in
      the time series. From a matrix theory perspective, the selection
      of L must also consider the singular value distribution of the
      Hankel matrix. The orthogonality properties of the singular
      vectors in the SVD play a key role in determining the extent to
      which a component can be separated. Thus, the selection of L must
      take the spectral characteristics of the Hankel matrix into
      account in order to maintain decomposition quality. The spectral
      structure of the Hankel matrix can be analyzed through the
      singular value decomposition, written as</p>
    </disp-quote>
    <table-wrap>
      <table>
        <colgroup>
          <col width="58%" />
          <col width="42%" />
        </colgroup>
        <thead>
          <tr>
            <th><p specific-use="wrapper">
              <disp-quote>
                <p>𝐻 = UΣV<sup>𝑇</sup></p>
              </disp-quote>
            </p></th>
            <th>(2)</th>
          </tr>
        </thead>
        <tbody>
        </tbody>
      </table>
    </table-wrap>
    <disp-quote>
      <p>where <italic>U</italic> and <italic>V</italic> are matrices
      with orthonormal columns, <italic>Σ</italic> is a diagonal matrix
      with singular values σ<sub>i</sub> ≥ 0, and the condition
      σ<sub>i</sub> &gt; σ<sub>i+1</sub> indicates the separability of
      the components.</p>
      <p>To measure the separability quality of the time series
      components F⁽¹⁾ and F⁽²⁾, the concept of w-orthogonality is used.
      The two time series components F⁽¹⁾ and F⁽²⁾ are said to be
      w-orthogonal if the inner product of the columns of their
      trajectory matrices is close to zero, which is expressed as</p>
    </disp-quote>
    <table-wrap>
      <table>
        <colgroup>
          <col width="60%" />
          <col width="40%" />
        </colgroup>
        <thead>
          <tr>
            <th><p specific-use="wrapper">
              <disp-quote>
                <p>〈𝑤<sub>1</sub>, 𝑤<sub>2</sub>〉 = 0</p>
              </disp-quote>
            </p></th>
            <th>(3)</th>
          </tr>
        </thead>
        <tbody>
        </tbody>
      </table>
    </table-wrap>
    <disp-quote>
      <p>where w₁ and w₂ are columns of the trajectory matrices H⁽¹⁾
      and H⁽²⁾. An inner product value close to zero indicates that the
      two components can be well separated.</p>
      <p>The strength of the separability (ν) can be measured as</p>
    </disp-quote>
    <table-wrap>
      <table>
        <colgroup>
          <col width="70%" />
          <col width="30%" />
        </colgroup>
        <thead>
          <tr>
            <th><p specific-use="wrapper">
              <disp-quote>
                <p>𝑣 = ‖𝑈<sup>𝑇</sup>𝑉‖ / √(𝑡𝑟(𝑈<sup>𝑇</sup>𝑈) 𝑡𝑟(𝑉<sup>𝑇</sup>𝑉))</p>
              </disp-quote>
            </p></th>
            <th>(4)</th>
          </tr>
        </thead>
        <tbody>
        </tbody>
      </table>
    </table-wrap>
    <disp-quote>
      <p>where U and V are the trajectory matrices of different
      components and tr denotes the trace operator, the sum of the main
      diagonal elements of a matrix. This provides a quantitative
      measure of how well the two components can be separated in the
      space of the Hankel matrix, and a value of ν close to zero
      indicates good separability.</p>
    </disp-quote>
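The separability measure ν of equation (4) can be sketched directly from its definition, using tr(UᵀU) = ‖U‖²_F. The code below is our illustration (names `trajectory_matrix` and `separability` are ours), applied to a synthetic trend and a seasonal component with L chosen inside the suggested N/4 &lt; L &lt; N/2 band.

```python
import numpy as np

def trajectory_matrix(x, L):
    # L x K Hankel (trajectory) matrix, K = N - L + 1
    x = np.asarray(x, dtype=float)
    K = len(x) - L + 1
    return np.column_stack([x[j:j + L] for j in range(K)])

def separability(U, V):
    # nu = ||U^T V||_F / sqrt(tr(U^T U) tr(V^T V))   (equation (4));
    # note that tr(U^T U) equals the squared Frobenius norm of U.
    num = np.linalg.norm(U.T @ V, 'fro')
    den = np.sqrt(np.trace(U.T @ U) * np.trace(V.T @ V))
    return num / den

N = 240
t = np.arange(N)
L = N // 3                        # inside N/4 < L < N/2
trend = 0.05 * t                  # synthetic deterministic trend
season = np.sin(2 * np.pi * t / 12)  # synthetic seasonal component
nu = separability(trajectory_matrix(trend, L),
                  trajectory_matrix(season, L))
```

By the Cauchy-Schwarz inequality ν always lies in [0, 1]; values near zero indicate that the two components occupy nearly orthogonal trajectory spaces.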
  </sec>
  <sec id="fundamental-theorem">
    <title>Fundamental Theorem</title>
    <disp-quote>
      <p>The analysis of the Hankel matrix in SSA rests on an important
      theoretical basis: the orthogonality theorem, which states that
      two components F⁽¹⁾ and F⁽²⁾ of the time series F are separable if
      the corresponding trajectory matrices have an inner product of
      zero. In the context of SSA, the additive decomposition
      F = F⁽¹⁾ + F⁽²⁾ induces the decomposition of the Hankel matrix
      H = H⁽¹⁾ + H⁽²⁾, where H⁽k⁾ inherits the Hankel properties of the
      component F⁽k⁾ (Hassani and Mahmoudvand, 2013). The threshold ε
      represents the theoretical upper limit for weak separability and
      the theoretical lower limit for strong separability in the space
      of Hankel matrices. The weak and strong separability conditions
      are interrelated through the spectral structure of the Hankel
      matrix. The threshold ε of 0.1 was established based on a
      theoretical analysis of the balance between the sensitivity of
      component separation and the numerical stability of the
      decomposition. This value is the theoretical upper limit for weak
      separability: the two components are considered sufficiently
      separate if the normalized inner product is less than this
      threshold. The w-correlation ρ<sub>ij</sub> provides a
      quantitative measure of orthogonality that is directly related to
      the separation capacity of components in the space of Hankel
      matrices.</p>
      <p>The orthogonality theorem states that the two components F⁽¹⁾
      and F⁽²⁾ are weakly separable if they satisfy</p>
    </disp-quote>
    <table-wrap>
      <table>
        <colgroup>
          <col width="64%" />
          <col width="36%" />
        </colgroup>
        <thead>
          <tr>
            <th><p specific-use="wrapper">
              <disp-quote>
                <p>‖𝑋<sup>(1)</sup>(𝑋<sup>(2)</sup>)<sup>𝑇</sup>‖ / (‖𝑋<sup>(1)</sup>‖ ‖𝑋<sup>(2)</sup>‖) &lt; 𝜀</p>
              </disp-quote>
            </p></th>
            <th>(5)</th>
          </tr>
        </thead>
        <tbody>
        </tbody>
      </table>
    </table-wrap>
    <disp-quote>
      <p>where ε is the threshold value that determines the degree of
      separability, and X⁽i⁾ is the trajectory matrix of F⁽i⁾.</p>
      <p>For strong separability, an additional condition on the
      singular values is required (Chen and Zhang, 2019), stated as</p>
    </disp-quote>
    <table-wrap>
      <table>
        <colgroup>
          <col width="64%" />
          <col width="36%" />
        </colgroup>
        <thead>
          <tr>
            <th><p specific-use="wrapper">
              <disp-quote>
                <p>|𝜎<sub>𝑖</sub><sup>(1)</sup> − 𝜎<sub>𝑗</sub><sup>(2)</sup>| &gt; 𝛿</p>
              </disp-quote>
            </p></th>
            <th>(6)</th>
          </tr>
        </thead>
        <tbody>
        </tbody>
      </table>
    </table-wrap>
    <disp-quote>
      <p>where <italic>i</italic> and <italic>j</italic> are indices
      over the different components of the SSA decomposition result,
      σ<sub>i</sub>⁽k⁾ is the i-th singular value of the k-th component,
      and δ is the minimum threshold for the gap between singular
      values. A systematic approach to verifying the separability
      conditions involves the calculation of the w-correlation
      (Golyandina and Shlemov, 2015), stated as</p>
    </disp-quote>
    <table-wrap>
      <table>
        <colgroup>
          <col width="61%" />
          <col width="39%" />
        </colgroup>
        <thead>
          <tr>
            <th><p specific-use="wrapper">
              <disp-quote>
                <p>ρ<sub>𝑖𝑗</sub> = 〈𝑓̃<sub>𝑖</sub>, 𝑓̃<sub>𝑗</sub>〉 / (‖𝑓̃<sub>𝑖</sub>‖ ‖𝑓̃<sub>𝑗</sub>‖)</p>
              </disp-quote>
            </p></th>
            <th>(7)</th>
          </tr>
        </thead>
        <tbody>
        </tbody>
      </table>
    </table-wrap>
    <disp-quote>
      <p>where <italic>ρ<sub>ij</sub></italic> is the weighted
      correlation between two reconstructed components of the SSA, and
      <italic>F̃<sub>i</sub></italic> and <italic>F̃<sub>j</sub></italic>
      are the reconstructions of the different components.</p>
      <p>SSA not only serves as a tool for analyzing time series; it is
      also relevant to pure mathematics, especially in the context of
      spectral analysis and matrix decomposition. The Hankel matrix
      decomposition applied in SSA is often associated with eigenvalue
      theory, which describes how the spectral information of a system
      can be broken down into smaller, more understandable
      components.</p>
      <p>The relationship between SSA and other spectral methods
      illustrates the utilization of Hankel matrix decomposition in a
      variety of fields, including numerical optimization and
      multivariate data analysis. One of the interesting aspects is the
      application of SSA in dimension reduction, where the decomposition
      of singular values allows for a deeper understanding of the
      underlying data structure. In particular, SSA representations
      utilizing singular value decomposition (SVD) can be attributed to
      spectral transformations in orthogonal function analysis. This
      approach suggests that SSA is essentially a development of
      classical methods such as Karhunen-Loève transformations, which
      are widely applied in signal processing and data compression.</p>
    </disp-quote>
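The w-correlation of equation (7) weights each sample by the number of times it appears in the trajectory matrix. A minimal sketch, assuming NumPy (the function name `w_correlation` and the weight formula w_n = min(n + 1, L*, N − n) with L* = min(L, K) are our illustration of the standard weighting, not code from the paper):

```python
import numpy as np

def w_correlation(f1, f2, L):
    """rho_ij = <f_i, f_j>_w / (||f_i||_w ||f_j||_w), equation (7).
    The weight w_n counts how many entries of the L x K trajectory
    matrix contain x_n: w_n = min(n + 1, L*, N - n), L* = min(L, K)."""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    N = len(f1)
    K = N - L + 1
    Ls = min(L, K)
    w = np.array([min(n + 1, Ls, N - n) for n in range(N)])
    inner = lambda a, b: float(np.sum(w * a * b))
    return inner(f1, f2) / np.sqrt(inner(f1, f1) * inner(f2, f2))

t = np.arange(120)
# two phase-shifted harmonics: nearly w-orthogonal components
rho = w_correlation(np.sin(2 * np.pi * t / 12),
                    np.cos(2 * np.pi * t / 12), L=40)
```

Values of |ρ| near zero indicate well-separated components, while ρ = 1 is returned for a component compared with itself.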
  </sec>
</sec>













<sec>
  <title>RESULTS AND DISCUSSION</title>
  <disp-quote>
    <p>The theoretical analysis that has been carried out shows that the
    structure of the Hankel matrix in SSA embedding provides a solid
    basis for the separation of time series components. The
    anti-diagonal structure of the Hankel matrix not only preserves
    temporal information but also contributes to the effectiveness of
    the spectral decomposition. The transformation of the time series X
    into the Hankel matrix <italic>H</italic> results in a mathematical
    structure with the following properties.</p>
  </disp-quote>
  <sec id="structure-anti-diagonal">
    <title>Structure Anti-diagonal</title>
    <disp-quote>
      <p>The properties of anti-diagonal structures are written as</p>
    </disp-quote>
    <table-wrap>
      <table>
        <colgroup>
          <col width="57%" />
          <col width="43%" />
        </colgroup>
        <thead>
          <tr>
            <th><p specific-use="wrapper">
              <disp-quote>
                <p>hᵢⱼ = hᵤᵥ</p>
              </disp-quote>
            </p></th>
            <th>(8)</th>
          </tr>
        </thead>
        <tbody>
        </tbody>
      </table>
    </table-wrap>
    <disp-quote>
      <p>with <italic>i + j = u + v</italic>, which produces an
      invariance that guarantees temporal preservation. This structure
      is fundamental for the spectral analysis of time series
      components.</p>
      <p>The anti-diagonal structure of the Hankel matrix has important
      implications in three respects. First, the invariance along the
      anti-diagonals preserves the temporal information of the original
      time series, leaves the temporal order and dependencies unchanged,
      and establishes the basis for the analysis of temporal components.
      Second, the spectral decomposition of H yields an orthonormal
      basis for the trajectory space (the columns of U), a spectrum of
      singular values that reflects the structure of the components, and
      a coordinate transformation that preserves the temporal structure.
      Third, the Hankel structure provides a theoretical basis for weak
      separability through the w-correlation (ρ<sub>ij</sub> &lt; ε),
      for strong separability through singular value gaps
      (|σ<sub>i</sub>⁽¹⁾ − σ<sub>j</sub>⁽²⁾| &gt; δ), and for a
      separation of components that preserves the temporal
      structure.</p>
    </disp-quote>
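The invariance of equation (8) is easy to verify numerically on a small trajectory matrix: every anti-diagonal (constant i + j) contains a single value. The snippet below is our illustrative check with toy values, not code from the paper.

```python
import numpy as np

# toy series and its 5 x 8 trajectory matrix
x = np.arange(1, 13)                       # N = 12
L, K = 5, 8                                # K = N - L + 1
H = np.column_stack([x[j:j + L] for j in range(K)])

# equation (8): h_ij = h_uv whenever i + j = u + v, i.e.
# every anti-diagonal of the trajectory matrix is constant
for s in range(L + K - 1):
    diag = {H[i, s - i] for i in range(L) if 0 <= s - i < K}
    assert len(diag) == 1                  # one value per anti-diagonal
```

Each anti-diagonal with index s simply holds the sample x<sub>s+1</sub>, which is why the embedding preserves the temporal order of the series.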
  </sec>
  <sec id="implications-of-algebra">
    <title>Implications of Algebra</title>
    <disp-quote>
      <p>The decomposition of the singular value of the Hankel matrix is
      expressed as</p>
    </disp-quote>
    <table-wrap>
      <table>
        <colgroup>
          <col width="58%" />
          <col width="42%" />
        </colgroup>
        <thead>
          <tr>
            <th><p specific-use="wrapper">
              <disp-quote>
                <p>H = UΣV<sup>𝑇</sup></p>
              </disp-quote>
            </p></th>
            <th>(9)</th>
          </tr>
        </thead>
        <tbody>
        </tbody>
      </table>
    </table-wrap>
    <disp-quote>
      <p>by generating an orthonormal base for the trajectory space and
      the singular value in <italic>Σ</italic> gives a measure of the
      contribution of each component.</p>
      <p>The singular value decomposition in equation (3) produces
      matrices U and V whose columns form orthonormal bases for the
      column and row spaces of the Hankel matrix. The diagonal matrix Σ
      contains the singular values in descending order, where each
      magnitude indicates the relative contribution of that component to
      the original time series. This spectral structure allows
      components to be separated according to the characteristics of
      their singular values.</p>
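The two properties just stated, orthonormal U and V and descending singular values, can be checked numerically. A sketch with NumPy (the noisy seasonal series and L = 30 are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * np.arange(100) / 12) + 0.1 * rng.standard_normal(100)
L = 30
H = np.column_stack([x[j:j + L] for j in range(len(x) - L + 1)])

U, s, Vt = np.linalg.svd(H, full_matrices=False)

# Columns of U (and rows of Vt) are orthonormal bases for the
# column and row spaces of the trajectory matrix.
assert np.allclose(U.T @ U, np.eye(U.shape[1]))
assert np.allclose(Vt @ Vt.T, np.eye(Vt.shape[0]))

# np.linalg.svd returns singular values in descending order; their
# magnitudes rank each component's contribution to the series.
assert np.all(np.diff(s) <= 0)
```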
      <p>The relationship between the anti-diagonal structure in
      equation (2) and the spectral decomposition in equation (3)
      provides a theoretical basis for separability analysis. The
      orthogonality measured by the w-correlation in equation (4) has a
      geometric interpretation in the Hankel matrix space, where the
      threshold <italic>ε</italic> = 0.1 represents the theoretical
      upper limit indicating weak separability, i.e. the maximum
      correlation permitted between two components for them to be
      considered separate.</p>
      <p>The separability of time series components in SSA is closely
      related to the orthogonality properties of the trajectory matrix
      columns. For two components <italic>F</italic>⁽¹⁾ and
      <italic>F</italic>⁽²⁾, weak w-orthogonal separability is achieved
      when</p>
    </disp-quote>
    <table-wrap>
      <table>
        <colgroup>
          <col width="63%" />
          <col width="37%" />
        </colgroup>
        <thead>
          <tr>
            <th><p specific-use="wrapper">
              <disp-quote>
                <p>ρ<sub>ij</sub> = |〈f̃<sub>i</sub>, f̃<sub>j</sub>〉| / (‖f̃<sub>i</sub>‖ ‖f̃<sub>j</sub>‖) &lt; ε</p>
              </disp-quote>
            </p></th>
            <th>(10)</th>
          </tr>
        </thead>
        <tbody>
        </tbody>
      </table>
    </table-wrap>
    <disp-quote>
      <p>with the threshold <italic>ε</italic> = 0.1 based on
      theoretical analysis.</p>
      <p>Two components <italic>F</italic>⁽¹⁾ and F⁽²⁾ are weakly
      separable if their trajectory matrices satisfy</p>
    </disp-quote>
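A sketch of the w-correlation of equation (10), with the standard SSA weights w<sub>k</sub> counting how many times x<sub>k</sub> appears in the trajectory matrix (the trend and seasonal components below are hypothetical, chosen only to exercise the function):

```python
import numpy as np

def w_correlation(f1, f2, L):
    """w-correlation rho_ij of eq. (10): inner products are weighted by
    w_k, the number of times x_k appears in the L x K trajectory matrix."""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    N = f1.size
    K = N - L + 1
    k = np.arange(N)
    # w_k = min(k + 1, min(L, K), N - k): a trapezoidal weight profile.
    w = np.minimum.reduce([k + 1, np.full(N, min(L, K)), N - k])
    inner = lambda a, b: float(np.sum(w * a * b))
    return abs(inner(f1, f2)) / np.sqrt(inner(f1, f1) * inner(f2, f2))

t = np.arange(120)
trend = 0.05 * t                       # hypothetical trend component
seasonal = np.sin(2 * np.pi * t / 12)  # hypothetical seasonal component
rho = w_correlation(trend, seasonal, L=40)  # compare against epsilon = 0.1
```

A value of rho below the threshold ε = 0.1 would indicate, per the text, that the two components can be regarded as weakly separated.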
    <table-wrap>
      <table>
        <colgroup>
          <col width="63%" />
          <col width="37%" />
        </colgroup>
        <thead>
          <tr>
            <th><p specific-use="wrapper">
              <disp-quote>
                <p>‖X⁽¹⁾(X⁽²⁾)<sup>T</sup>‖ / (‖X⁽¹⁾‖ ‖X⁽²⁾‖) &lt; ε</p>
              </disp-quote>
            </p></th>
            <th>(11)</th>
          </tr>
        </thead>
        <tbody>
        </tbody>
      </table>
    </table-wrap>
    <disp-quote>
      <p>Strong separability requires an additional condition on the
      singular values, written as</p>
    </disp-quote>
    <table-wrap>
      <table>
        <colgroup>
          <col width="63%" />
          <col width="37%" />
        </colgroup>
        <thead>
          <tr>
            <th><p specific-use="wrapper">
              <disp-quote>
                <p>|σ<sub>i</sub>⁽¹⁾ − σ<sub>j</sub>⁽²⁾| &gt; δ</p>
              </disp-quote>
            </p></th>
            <th>(12)</th>
          </tr>
        </thead>
        <tbody>
        </tbody>
      </table>
    </table-wrap>
    <disp-quote>
      <p>where <italic>δ</italic> is the minimum threshold for the
      singular value gap.</p>
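Equations (11) and (12) can be evaluated directly for a pair of components. A sketch, assuming NumPy and illustrative trend/seasonal components (not data from the paper); the Frobenius norm is used throughout, and the Cauchy–Schwarz inequality keeps the eq. (11) ratio in [0, 1]:

```python
import numpy as np

def traj(x, L):
    """L x K Hankel trajectory matrix of a 1-D series."""
    return np.column_stack([x[j:j + L] for j in range(len(x) - L + 1)])

t = np.arange(96)
x1 = 0.1 * t                      # hypothetical trend component F(1)
x2 = np.sin(2 * np.pi * t / 12)   # hypothetical seasonal component F(2)
L = 24
X1, X2 = traj(x1, L), traj(x2, L)

# Weak separability (eq. 11): the normalized cross term should be small.
ratio = np.linalg.norm(X1 @ X2.T) / (np.linalg.norm(X1) * np.linalg.norm(X2))

# Strong separability (eq. 12): the leading singular values of the two
# components should differ by at least a gap delta.
s1 = np.linalg.svd(X1, compute_uv=False)
s2 = np.linalg.svd(X2, compute_uv=False)
gap = abs(s1[0] - s2[0])
```

One would then compare `ratio` against ε and `gap` against the chosen δ.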
      <p>Selecting the window length L in the interval N/4 &lt; L &lt;
      N/2 optimizes the dimensionality of the trajectory space, the
      numerical conditioning of the decomposition, and the effectiveness
      of component separation. The choice of window length directly
      affects the quality of component separation: the theoretical study
      shows that selection within this interval not only balances
      temporal information against matrix dimensions but also guarantees
      numerical stability in the computation of the singular values.
      Improper parameter selection can produce a mixing effect that
      reduces the effectiveness of separating the main components of the
      time series. The lower limit N/4 is needed to ensure adequate
      coverage of temporal information, while the upper limit N/2
      prevents excessive duplication of information in the trajectory
      matrix.</p>
      <p>This study emphasizes that the <italic>w-correlation</italic>
      plays a crucial role in assessing the degree of separation of time
      series components in the Hankel matrix space. Correlation values
      below the threshold indicate that the components extracted by SSA
      are well separated, yielding a more accurate reconstruction.
      Verification of the separability conditions is carried out
      theoretically through w-correlation analysis and evaluation of the
      singular value spectrum. The w-correlation matrix
      <italic>ρ<sub>ij</sub></italic> provides a quantitative measure of
      the degree of orthogonality between the reconstructed components,
      while the spectrum analysis checks the distribution of singular
      values and verifies the gaps between the singular values of
      different components. The threshold <italic>ε</italic> = 0.1 has
      been shown theoretically to provide an adequate criterion for
      separability.</p>
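The interval N/4 &lt; L &lt; N/2 can be scanned numerically. The sketch below (illustrative trend + seasonal + noise series; the relative singular-value gap after the first three components is an assumed proxy for separation quality, not a criterion from the paper) picks the candidate L with the largest such gap:

```python
import numpy as np

def traj(x, L):
    """L x K Hankel trajectory matrix of a 1-D series."""
    return np.column_stack([x[j:j + L] for j in range(len(x) - L + 1)])

rng = np.random.default_rng(1)
t = np.arange(144)
x = 0.02 * t + np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(t.size)
N = x.size

# Scan candidate window lengths in the recommended interval N/4 < L < N/2
# and record the relative gap after the first three singular values, a
# simple proxy for how cleanly trend + seasonality split from noise.
candidates = range(N // 4 + 1, N // 2)
gaps = {}
for L in candidates:
    s = np.linalg.svd(traj(x, L), compute_uv=False)
    gaps[L] = (s[2] - s[3]) / s[0]

best_L = max(gaps, key=gaps.get)  # candidate with the clearest spectral gap
```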
      <p>SSA analysis shows that the data pattern is more stable, with a
      significant seasonal uptrend. Compared with the moving average or
      ARIMA methods, SSA is better able to capture seasonal patterns
      without requiring the assumption of data stationarity. The
      <italic>w-correlation</italic> test results showed that selecting
      the optimal window length contributes to improving the
      separability of the different data components. The SSA method can
      therefore be used as a tool for detecting data trends and
      anticipating spikes that can affect a company's financial
      stability.</p>
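The reconstruction step mentioned above inverts the embedding by diagonal averaging (Hankelization): each term of the series is recovered as the mean of the corresponding anti-diagonal of a component matrix. A minimal sketch, assuming NumPy:

```python
import numpy as np

def hankelize(Y):
    """Diagonal averaging: map a matrix back to a series by averaging
    each anti-diagonal i + j = const (the inverse of the embedding step)."""
    L, K = Y.shape
    N = L + K - 1
    out = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):
        for j in range(K):
            out[i + j] += Y[i, j]
            counts[i + j] += 1
    return out / counts

# Hankelizing a true trajectory matrix returns the original series exactly,
# since every anti-diagonal is constant.
x = np.arange(10.0)
L = 4
H = np.column_stack([x[j:j + L] for j in range(len(x) - L + 1)])
assert np.allclose(hankelize(H), x)
```

In full SSA, `hankelize` would be applied to each rank-one term of the SVD to obtain the reconstructed components whose w-correlations are then examined.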
    </disp-quote>
  </sec>
</sec>
<sec>
 <title>CONCLUSIONS AND RECOMMENDATIONS</title>
  <disp-quote>
    <p>This study shows that the structure of the Hankel matrix in SSA
    embedding has an important role in the separation of time series
    components in data analysis. The selection of optimal window lengths
    in the range of <italic>N/4 &lt; L &lt; N/2</italic> allows for a
    balance between temporal information and matrix dimensions, thereby
    increasing the effectiveness of the method in identifying data
    patterns. By understanding the anti-diagonal structure of the Hankel
    matrix and verifying the separability conditions via w-correlation,
    the embedding process in SSA can be optimized for a wide range of
    practical applications in enterprise industries.</p>
  </disp-quote>
</sec>
<sec>
   <title>ADVANCED RESEARCH</title>
  <disp-quote>
    <p>For further research, further exploration of the combination of
    SSA with machine learning techniques is recommended to improve the
    accuracy of data pattern predictions.</p>
  </disp-quote>
</sec>
<sec>
      <title>REFERENCES</title>
      <ref-list>
<ref id="ref1">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Box</surname><given-names>G. E. P.</given-names></name>
      <name><surname>Jenkins</surname><given-names>G. M.</given-names></name>
      <name><surname>Chakravarti</surname><given-names>I. M.</given-names></name>
    </person-group>
    <article-title>Time Series Analysis Forecasting and Control</article-title>
    <source>Journal of American Statistical Association</source>
    <year>1973</year>
    <volume>68</volume>
    <issue>342</issue>
    <fpage>493</fpage>
    <lpage>508</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref2">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Chen</surname><given-names>Y.</given-names></name>
      <name><surname>Li</surname><given-names>Y.</given-names></name>
      <name><surname>Zhang</surname><given-names>J.</given-names></name>
    </person-group>
    <article-title>Limitations of conventional time series analysis methods in complex financial data</article-title>
    <source>IEEE Transactions on Signal Processing</source>
    <year>2019</year>
    <volume>65</volume>
    <issue>4</issue>
    <fpage>892</fpage>
    <lpage>904</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref3">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Golyandina</surname><given-names>N.</given-names></name>
      <name><surname>Korobeynikov</surname><given-names>A.</given-names></name>
      <name><surname>Zhigljavsky</surname><given-names>A.</given-names></name>
    </person-group>
    <source>Singular Spectrum Analysis with R</source>
    <publisher-name>Springer</publisher-name>
    <publisher-loc>New York</publisher-loc>
    <year>2013</year>
    <comment>[Book]</comment>
  </element-citation>
</ref>

<ref id="ref4">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Golyandina</surname><given-names>N.</given-names></name>
      <name><surname>Shlemov</surname><given-names>A.</given-names></name>
    </person-group>
    <article-title>Variations of singular spectrum analysis for separability improvement: Non-orthogonal decompositions of time series</article-title>
    <source>Statistics and Its Interface</source>
    <year>2015</year>
    <volume>8</volume>
    <issue>3</issue>
    <fpage>277</fpage>
    <lpage>294</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref5">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Golyandina</surname><given-names>N.</given-names></name>
      <name><surname>Zhigljavsky</surname><given-names>A.</given-names></name>
    </person-group>
    <article-title>Basic Singular Spectrum Analysis and Forecasting with R</article-title>
    <source>Computational Statistics &amp; Data Analysis</source>
    <year>2013</year>
    <volume>71</volume>
    <fpage>934</fpage>
    <lpage>954</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref6">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Golyandina</surname><given-names>N.</given-names></name>
      <name><surname>Zhigljavsky</surname><given-names>A.</given-names></name>
    </person-group>
    <article-title>Time Series Analysis: Methods and Applications</article-title>
    <source>Wiley Interdisciplinary Reviews: Computational Statistics</source>
    <year>2013</year>
    <volume>2</volume>
    <issue>4</issue>
    <fpage>433</fpage>
    <lpage>459</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref7">
  <element-citation publication-type="book">
    <person-group person-group-type="author">
      <name><surname>Gower</surname><given-names>J. C.</given-names></name>
      <name><surname>Gardner-Lubbe</surname><given-names>S.</given-names></name>
      <name><surname>Le Roux</surname><given-names>N. J.</given-names></name>
    </person-group>
    <source>Understanding Biplots</source>
    <publisher-name>John Wiley &amp; Sons</publisher-name>
    <publisher-loc>Chichester</publisher-loc>
    <year>2011</year>
    <comment>[Book]</comment>
  </element-citation>
</ref>

<ref id="ref8">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Hassani</surname><given-names>H.</given-names></name>
      <name><surname>Mahmoudvand</surname><given-names>R.</given-names></name>
    </person-group>
    <article-title>Multivariate Singular Spectrum Analysis: A General View and New Vector Forecasting Approach</article-title>
    <source>International Journal of Energy and Statistics</source>
    <year>2013</year>
    <volume>1</volume>
    <issue>1</issue>
    <fpage>55</fpage>
    <lpage>83</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref9">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>James</surname><given-names>G.</given-names></name>
    </person-group>
    <article-title>Introduction to Singular Spectrum Analysis in R</article-title>
    <source>Biometrika</source>
    <year>2019</year>
    <volume>58</volume>
    <issue>3</issue>
    <fpage>453</fpage>
    <lpage>467</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref10">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Kim</surname><given-names>S.</given-names></name>
      <name><surname>Park</surname><given-names>J.</given-names></name>
    </person-group>
    <article-title>Optimal Window Length Selection in SSA: A Matrix-Theoretic Approach</article-title>
    <source>Computational Statistics</source>
    <year>2021</year>
    <volume>35</volume>
    <issue>4</issue>
    <fpage>892</fpage>
    <lpage>907</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref11">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Li</surname><given-names>X.</given-names></name>
      <name><surname>Chen</surname><given-names>J.</given-names></name>
      <name><surname>Zhang</surname><given-names>W.</given-names></name>
    </person-group>
    <article-title>Theoretical advances in SSA: From matrix properties to algorithmic improvements</article-title>
    <source>Digital Signal Processing</source>
    <year>2021</year>
    <volume>128</volume>
    <fpage>103</fpage>
    <lpage>117</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref12">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Moskvina</surname><given-names>V.</given-names></name>
      <name><surname>Zhigljavsky</surname><given-names>A.</given-names></name>
    </person-group>
    <article-title>An improved theoretical framework for SSA based on matrix analysis</article-title>
    <source>Statistics and Its Interface</source>
    <year>2021</year>
    <volume>14</volume>
    <issue>1</issue>
    <fpage>89</fpage>
    <lpage>103</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref13">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Park</surname><given-names>J.</given-names></name>
      <name><surname>Kim</surname><given-names>S.</given-names></name>
    </person-group>
    <article-title>Matrix-based optimization of window length selection in SSA</article-title>
    <source>Journal of Computational and Applied Mathematics</source>
    <year>2021</year>
    <volume>392</volume>
    <fpage>113</fpage>
    <lpage>124</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref14">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Rodriguez-Aragón</surname><given-names>L.</given-names></name>
      <name><surname>González-Concepción</surname><given-names>C.</given-names></name>
    </person-group>
    <article-title>Advanced matrix techniques in SSA: New perspectives on separability conditions</article-title>
    <source>Computational Statistics</source>
    <year>2022</year>
    <volume>37</volume>
    <issue>2</issue>
    <fpage>892</fpage>
    <lpage>911</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref15">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Rodriguez</surname><given-names>M.</given-names></name>
      <name><surname>Martinez</surname><given-names>C.</given-names></name>
    </person-group>
    <article-title>Separability Conditions in SSA: A Matrix Decomposition Perspective</article-title>
    <source>Linear Algebra and its Applications</source>
    <year>2021</year>
    <volume>515</volume>
    <fpage>144</fpage>
    <lpage>162</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref16">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Sanei</surname><given-names>S.</given-names></name>
      <name><surname>Hassani</surname><given-names>H.</given-names></name>
    </person-group>
    <article-title>Optimal Parameter Selection in Singular Spectrum Analysis: A Comprehensive Guide</article-title>
    <source>Signal Processing</source>
    <year>2020</year>
    <volume>165</volume>
    <fpage>107</fpage>
    <lpage>119</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref17">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Saputro</surname><given-names>D. R. S.</given-names></name>
      <name><surname>Prasetyo</surname><given-names>H.</given-names></name>
      <name><surname>Wibowo</surname><given-names>A.</given-names></name>
      <name><surname>Khairina</surname><given-names>F.</given-names></name>
      <name><surname>Sidiq</surname><given-names>K.</given-names></name>
      <name><surname>Wibowo</surname><given-names>G. N. A.</given-names></name>
    </person-group>
    <article-title>Bibliometric Analysis of Neural Basis Expansion Analysis for Interpretable Time Series (N-Beats) for Research Trend Mapping</article-title>
    <source>BAREKENG: Jurnal Ilmu Matematika dan Terapan</source>
    <year>2023</year>
    <volume>17</volume>
    <issue>2</issue>
    <fpage>1103</fpage>
    <lpage>1112</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref18">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Yatscoff</surname><given-names>R. W.</given-names></name>
      <name><surname>Hayter</surname><given-names>J.</given-names></name>
    </person-group>
    <article-title>Bibliometric evaluations of modern Clinical Chemistry are needed</article-title>
    <source>BAREKENG: Jurnal Ilmu Matematika dan Terapan</source>
    <year>1983</year>
    <volume>29</volume>
    <issue>10</issue>
    <fpage>1982</fpage>
    <lpage>1983</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref19">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Zhang</surname><given-names>X.</given-names></name>
      <name><surname>Zhang</surname><given-names>Y.</given-names></name>
      <name><surname>Chen</surname><given-names>J.</given-names></name>
    </person-group>
    <article-title>Global Data Growth Analysis and Forecasting: Challenges and Opportunities</article-title>
    <source>IEEE Transactions on Big Data</source>
    <year>2021</year>
    <volume>7</volume>
    <issue>2</issue>
    <fpage>321</fpage>
    <lpage>334</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

<ref id="ref20">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Zotov</surname><given-names>L.</given-names></name>
      <name><surname>Shlemov</surname><given-names>A.</given-names></name>
    </person-group>
    <article-title>Algebraic Properties of Hankel Matrices in SSA: A Theoretical Analysis</article-title>
    <source>Linear Algebra and its Applications</source>
    <year>2021</year>
    <volume>609</volume>
    <fpage>123</fpage>
    <lpage>142</lpage>
    <comment>[Journal]</comment>
  </element-citation>
</ref>

</ref-list>
</sec>
</body>
</article>
