<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "https://jats.nlm.nih.gov/publishing/1.3/JATS-journalpublishing1-3.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="1.3" article-type="research-article">
    <front>
        <journal-meta>
            <journal-id journal-id-type="issn">2963-4547</journal-id>
            <journal-title-group>
                <journal-title>Asian Journal of Management Analytics (AJMA)</journal-title>
                <abbrev-journal-title>Asian Journal of Management Analytics (AJMA)</abbrev-journal-title>
            </journal-title-group>
            <issn pub-type="epub">2963-4547</issn>
            <issn pub-type="ppub">2963-4547</issn>
            <publisher>
                <publisher-name>Formosa Publisher</publisher-name>
                <publisher-loc>Jl. Sutomo Ujung No.28 D, Durian, Kecamatan Medan Timur, Kota Medan, Sumatera Utara 20235, Indonesia.</publisher-loc>
            </publisher>
        </journal-meta>
        <article-meta>
            <article-id pub-id-type="doi">10.55927/ajma.v4i3.14997</article-id>
            <article-categories/>
            <title-group>
                <article-title>Managerial Consequences of the Algorithmic Determination of Incentive Decisions on Procedural Justice Perceptions in Production Line</article-title>
            </title-group>
            <contrib-group>
                <contrib contrib-type="author">
                    <name>
                        <surname>Warsono</surname>
                        <given-names>Heribertus Yudho</given-names>
                    </name>
                    <xref ref-type="corresp" rid="cor-0"/>
                </contrib>
                <contrib contrib-type="author">
                    <name>
                        <surname>Widada</surname>
                        <given-names>Dharma</given-names>
                    </name>
                </contrib>
                <contrib contrib-type="author">
                    <name>
                        <surname>Sumarliani</surname>
                        <given-names>Sri</given-names>
                    </name>
                </contrib>
            </contrib-group>
            <author-notes>
                <corresp id="cor-0">
                    <p>
                        <bold>Corresponding author:</bold> Heribertus Yudho Warsono  
                        <email>heribertusyudho@gmail.com</email>
                    </p>
                </corresp>
            </author-notes>
            <pub-date date-type="collection" iso-8601-date="2025-07-20">
                <day>20</day>
                <month>7</month>
                <year>2025</year>
            </pub-date>
            <volume>4</volume>
            <issue>3</issue>
            <issue-title>Managerial Consequences of the Algorithmic Determination of Incentive Decisions on Procedural Justice Perceptions in Production Line</issue-title>
            <fpage>1291</fpage>
            <lpage>1302</lpage>
            <history>
                <date date-type="received" iso-8601-date="2025-06-07">
                    <day>7</day>
                    <month>6</month>
                    <year>2025</year>
                </date>
                <date date-type="rev-recd" iso-8601-date="2025-06-24">
                    <day>24</day>
                    <month>6</month>
                    <year>2025</year>
                </date>
                <date date-type="accepted" iso-8601-date="2025-07-26">
                    <day>26</day>
                    <month>7</month>
                    <year>2025</year>
                </date>
            </history>
            <permissions>
                <copyright-holder>Formosa Publisher</copyright-holder>
                <license>
                    <ali:license_ref xmlns:ali="http://www.niso.org/schemas/ali/1.0/">https://journal.formosapublisher.org/licenses/by/4.0/</ali:license_ref>
                    <license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
                </license>
            </permissions>
            <self-uri xlink:href="https://journal.formosapublisher.org/index.php/ajma" xlink:title="Managerial Consequences of the Algorithmic Determination of Incentive Decisions on Procedural Justice Perceptions in Production Line">Managerial Consequences of the Algorithmic Determination of Incentive Decisions on Procedural Justice Perceptions in Production Line</self-uri>
            <abstract>
                <p>This study investigates the managerial consequences of algorithm-based decision-making in incentive allocation on production line workers’ perceptions of procedural fairness. Grounded in organizational justice theory and the socio-technical systems framework, the research explores how algorithm transparency, process clarity, and perceived impartiality influence fairness judgments. Using a mixed-method approach, data were collected from 186 factory workers through surveys, complemented by in-depth interviews, and analyzed using structural equation modeling. The findings reveal that low algorithm transparency correlates negatively with perceptions of procedural fairness, while the absence of human involvement in decision-making reduces trust in managerial systems. These results emphasize the need for human-centered algorithm design and fair management practices to uphold employee trust and fairness in Industry 4.0 environments.</p>
            </abstract>
            <kwd-group>
                <kwd>Algorithmic Management</kwd>
                <kwd>Procedural Justice</kwd>
                <kwd>Incentive System</kwd>
                <kwd>Production Line</kwd>
                <kwd>Industrial Organization</kwd>
            </kwd-group>
            <custom-meta-group>
                <custom-meta>
                    <meta-name>File created by JATS Editor</meta-name>
                    <meta-value>
                        <ext-link ext-link-type="uri" xlink:href="https://jatseditor.com" xlink:title="JATS Editor">JATS Editor</ext-link>
                    </meta-value>
                </custom-meta>
                <custom-meta>
                    <meta-name>issue-created-year</meta-name>
                    <meta-value>2023</meta-value>
                </custom-meta>
            </custom-meta-group>
        </article-meta>
    </front>
    <body>
        <sec>
            <title>INTRODUCTION</title>
            <p>The advancement of digital technology over the last decade has brought a wave of significant transformation to organizational management systems, particularly in the manufacturing sector. One of the most tangible forms of this transformation is the adoption of algorithmic technology in decision-making processes that were previously dominated by human judgment. It is realized through the integration of data-driven systems, artificial intelligence, and automated performance measurement, which allows management to make decisions that are faster, more precise, and free from personal bias. In the production context, algorithmic systems are now used not only to set work schedules and monitor productivity, but also to determine workers' incentives based on automatically predetermined parameters <xref ref-type="bibr" rid="">(Savastano et al., 2019)</xref>.
            </p>
            <p>This phenomenon has given rise to a new paradigm in industrial relations, where traditional authority previously held by direct superiors or managers is now partially shifted to code-based systems and computational logic. On the one hand, this approach offers high efficiency, consistency of decisions, and savings in administrative costs. However, on the other hand, there are concerns about the social and psychological impact it causes, especially related to the perception of fairness, transparency, and participation in the decision-making process that has a direct impact on employee welfare <xref ref-type="bibr" rid="">(Aloisi, 2024)</xref>.
            </p>
            <p>In a traditional incentive system, workers can still enter into dialogue, raise objections, or receive explanations from management about the amount of an award and the basis for granting it. However, when such decisions are driven entirely by opaque and difficult-to-understand algorithms, workers risk losing both understanding of and control over the process. This can lead to a perception of procedural unfairness, especially when employees feel that the process does not take individual contexts into account, allows no room for clarification, and does not treat them as subjects who have the right to know the reasons behind the decisions they receive <xref ref-type="bibr" rid="">(Ott &amp; Dabrock, 2022)</xref>.
            </p>
            <p>Procedural fairness is an important element in the formation of trust in a managerial system. When workers feel that the process used to assess and reward them is conducted fairly, they tend to accept the results with a positive attitude, even when those results do not fully align with expectations. Conversely, if the process is considered unfair, non-transparent, or closed to input, dissatisfaction will increase and can lead to decreased motivation, resistance to the system, and a weakening of the working relationship between workers and the organization. In the context of mass production, where the relationship between individuals and systems is highly mechanistic, perceptions of procedural fairness can be a determining factor in long-term performance and loyalty <xref ref-type="bibr" rid="">(Kuusisto, 2024)</xref>.
            </p>
            <p>Furthermore, the presence of algorithmic systems without human involvement in the decision-making process raises questions about the legitimacy and accountability of management. Can a system that cannot be questioned, cannot explain itself, and cannot empathize be judged fair by the people it affects? Is the efficiency generated by automated systems worth the loss of a sense of involvement, trust, and recognition of personal value in the work environment? These questions become increasingly important in the era of Industry 4.0, in which organizations are required to be not only productive but also socially and ethically sustainable.</p>
            <p>This research addresses this challenge by empirically examining the impact of algorithm-based incentive determination on perceptions of procedural fairness on the production line. The main focus of this study is to understand how workers judge the fairness of a machine-determined incentive system, and how those perceptions shape their responses to the organization more broadly. The study uses a mixed-method approach that combines quantitative survey data with qualitative data from in-depth interviews, allowing a comprehensive understanding of this complex phenomenon.</p>
            <p>The aims of this study were to identify the algorithmic elements that influence perceptions of procedural fairness, including aspects of transparency, explainability, and objectivity; to evaluate the role of human involvement in mitigating negative perceptions of algorithmic systems; and to provide practical recommendations for management in designing an algorithm-based incentive system that is not only efficient but also fair, acceptable, and able to strengthen industrial relations in the digital era.</p>
        </sec>
        <sec>
            <title>LITERATURE REVIEW</title>
            <sec>
                <title>Algorithmic Management in the Context of Production</title>
                <p>Algorithmic management is a new form of organizational control that has emerged from the integration of digital technology into the managerial decision-making process <xref ref-type="bibr" rid="">(Schildt, 2017)</xref>. Such systems use computational logic and big-data processing to perform functions previously carried out by human managers, such as work scheduling, performance evaluation, and incentive allocation. In the context of production lines, algorithmic systems are generally programmed to assess output using quantitative parameters such as the number of units produced per unit of time, the production error rate, or the response time to a task. This technology is claimed to increase efficiency, objectivity, and consistency, while reducing the subjectivity that may arise in traditional decision-making <xref ref-type="bibr" rid="">(Sotskov, 2023)</xref>.
                </p>
                <p>However, this automated system also brings new challenges, especially in terms of the relationship between humans and technology in the work process. When decisions are made by algorithms that cannot interact or respond to individual contexts, there is a potential for dissonance between workers' expectations and the results they receive. In practice, many workers feel that the policies generated by such systems do not take into account the actual working conditions, especially if there is no room for explanation or feedback. This encourages the need for a critical evaluation of the managerial impact of the use of algorithmic systems, particularly in the aspects of procedural fairness and organizational legitimacy <xref ref-type="bibr" rid="">(Martin &amp; Waldman, 2023)</xref>.
                </p>
            </sec>
            <sec>
                <title>Procedural Justice and Employee Perception</title>
                <p>Procedural fairness refers to the extent to which individuals judge that the processes used to make decisions in an organization are conducted fairly. Key dimensions of procedural fairness include consistency in the application of rules, neutrality of decision-makers, accuracy of the information used, the opportunity to be heard, and adequate explanation of the decisions taken. In the context of incentives, workers assess not only the final outcome (e.g., the amount of a bonus or reward) but also the extent to which the process used to determine that outcome is transparent, reasonable, and fair <xref ref-type="bibr" rid="">(SimanTov-Nachlieli &amp; Bamberger, 2021)</xref>.</p>
                <p>The application of algorithmic systems changes this dynamic significantly. Decisions that were once explained directly by superiors or managers are now taken by automated systems that cannot always provide reasons or narratives workers can understand. When workers do not know how the system works, or have no access to the logic behind its decisions, they tend to judge the process as non-transparent and unfair. This sense of injustice can be exacerbated if there is no channel for workers to file complaints or obtain clarification. The perception of procedural fairness in a work environment dominated by algorithmic systems is therefore an important issue in modern human resource management <xref ref-type="bibr" rid="">(Lee, 2018)</xref>.
                </p>
            </sec>
            <sec>
                <title>Interaction between Technology, Incentives, and Employee Autonomy</title>
                <p>The relationship between technology and employee motivation is not linear. While digital systems such as algorithms can provide a more efficient work structure, they also have the potential to reduce workers' autonomy and sense of involvement in the work process. In the context of incentive allocation, the absence of human involvement in the decision-making process can weaken the perception of control and eliminate the relational dimension that has long been an important part of industrial relations <xref ref-type="bibr" rid="">(Zhou, 2016)</xref>. When workers feel treated as mere numbers in the system, the legitimacy of management as an institution that upholds the values of justice and participation is eroded <xref ref-type="bibr" rid="">(Berman et al., 2021)</xref>.
                </p>
                <p>An approach that completely leaves the incentive process to algorithms can create a psychological distance between workers and organizations. Therefore, some studies suggest the need for a hybrid approach, which is the integration of an efficient algorithmic system with human supervision that is able to bridge the social, emotional, and ethical aspects of the decision-making process <xref ref-type="bibr" rid="">(Rodgers et al., 2023)</xref>. Thus, procedural justice is not only the responsibility of the system, but must also be maintained through human interaction within a humanistic and participatory framework <xref ref-type="bibr" rid="">(Enarsson et al., 2022)</xref>.
                </p>
            </sec>
        </sec>
        <sec>
            <title>METHODOLOGY</title>
            <p>This study uses a mixed-methods approach with a multi-level explanatory design, which combines quantitative and qualitative techniques in an integrated manner. This approach was chosen to gain a more comprehensive understanding of the influence of algorithmic systems on the perception of procedural justice in production line workers, while capturing subjective dimensions that cannot be fully explained through numerical data. Explanatory design is used so that the findings of quantitative analysis can be deepened and explained qualitatively through in-depth interviews <xref ref-type="bibr" rid="">(Tortorella et al., 2017)</xref>.
            </p>
            <sec>
                <title>Population and Sample</title>
                <p>The population in this study is production line workers in three large manufacturing factories in the West Java region who have applied an algorithmic system in the process of providing incentives and performance assessments. The sampling technique was carried out using a purposive sampling approach, with the criteria of respondents who have worked for at least one year and are actively involved in the algorithmic-based system. A total of 186 respondents were successfully collected for the quantitative survey stage, while 12 key informants were selected for the in-depth interview stage, consisting of workers, supervisors, and HR managers.</p>
            </sec>
            <sec>
                <title>Instruments and Data Collection</title>
                <p>For quantitative data collection, the main instrument was a closed questionnaire constructed from indicators of perceived procedural fairness, including system transparency, the algorithm's ability to provide explanations (explainability), perceived impartiality, and perceived control over decisions. Each indicator was measured on a 5-point Likert scale from strongly disagree (1) to strongly agree (5). The questionnaire was tested for validity and reliability through a pilot test with 30 respondents. For the qualitative stage, data were collected through semi-structured interviews to explore respondents' narratives and subjective experiences of how the algorithm-based incentive system works. The interviews were conducted in person and recorded with the participants' consent, then transcribed and analyzed thematically.</p>
            </sec>
            <sec>
                <title>Data Analysis Techniques</title>
                <p>Quantitative data were analyzed using Structural Equation Modeling (SEM) based on SmartPLS software to test the relationship between latent variables, especially the influence of transparency, explainability, and perception of algorithm objectivity on the perception of procedural fairness <xref ref-type="bibr" rid="">(Duryadi, 2024)</xref>. SEM analysis was chosen because it is able to capture multivariate relationships in complex conceptual models and accommodate non-normal data.</p>
                <p>Qualitative data were analyzed using thematic analysis techniques with an inductive approach. The researcher identified narrative patterns, expressions of trust or dissatisfaction, and the dynamics of workers' perceptions of the existence of algorithmic systems in incentive decision-making. These qualitative findings are then used to explain or reinforce the results of quantitative findings, resulting in in-depth triangulation.</p>
            </sec>
            <sec>
                <title>Research Ethics</title>
                <p>This research adheres to the ethical principles of social research, including maintaining the confidentiality of respondents' identities, obtaining informed consent, and disguising the name of the institution to avoid the risk of exposure to sensitive information. All data is collected and stored securely, and is only used for academic purposes within the scope of this research.</p>
            </sec>
        </sec>
        <sec>
            <title>RESEARCH RESULTS</title>
            <sec>
                <title>Quantitative Findings: SEM Analysis of Perceptions of Procedural Justice</title>
                <p>Quantitative analysis was conducted on data obtained from 186 frontline production workers through a structured survey instrument designed to measure their perceptions of procedural justice within an algorithmic incentive system. The dataset was processed using Structural Equation Modeling (SEM) with the SmartPLS software to evaluate the strength and direction of relationships between three latent predictor variables (algorithm transparency, system explainability, and perceived objectivity) and the dependent variable, perceived procedural fairness <xref ref-type="bibr" rid="">(Alshurideh et al., 2023)</xref>.
                </p>
                <p>The measurement model demonstrated strong construct validity and reliability. All items showed outer loading values exceeding 0.70, and the Average Variance Extracted (AVE) for each construct surpassed the 0.50 threshold, indicating satisfactory convergent validity. In addition, the composite reliability of each latent construct exceeded 0.80, reinforcing internal consistency.</p>
                <p>The structural model further revealed that all three predictor variables had a statistically significant influence on procedural fairness perception. Specifically:</p>
                <table-wrap id="table-1">
                    <label>Table 1. Significant Influence on Procedural Fairness Perception</label>
                    <table frame="box" rules="all">
                        <thead>
                            <tr>
                                <th colspan="1" rowspan="1" style="" align="left" valign="top">
                                    <p>Predictor Variable</p>
                                </th>
                                <th colspan="1" rowspan="1" style="" align="left" valign="top">
                                    <p>Path Coefficient (β)</p>
                                </th>
                                <th colspan="1" rowspan="1" style="" align="left" valign="top">p-value</th>
                                <th colspan="1" rowspan="1" style="" align="left" valign="top">
                                    <p>Interpretation of Influence</p>
                                </th>
                            </tr>
                        </thead>
                        <tbody>
                            <tr>
                                <td colspan="1" rowspan="1" style="" align="left" valign="top">
                                    <p>Algorithm Transparency</p>
                                </td>
                                <td colspan="1" rowspan="1" style="" align="left" valign="top">
                                    <p>0.42</p>
                                </td>
                                <td colspan="1" rowspan="1" style="" align="left" valign="top">&lt; 0.001</td>
                                <td colspan="1" rowspan="1" style="" align="left" valign="top">
                                    <p>Strong and very significant</p>
                                    <p>positive influence</p>
                                </td>
                            </tr>
                            <tr>
                                <td colspan="1" rowspan="1" style="" align="left" valign="top">
                                    <p>Ability to Explain</p>
                                </td>
                                <td colspan="1" rowspan="1" style="" align="left" valign="top">
                                    <p>0.31</p>
                                </td>
                                <td colspan="1" rowspan="1" style="" align="left" valign="top">0.004</td>
                                <td colspan="1" rowspan="1" style="" align="left" valign="top">
                                    <p>Significant</p>
                                    <p>positive influence</p>
                                </td>
                            </tr>
                            <tr>
                                <td colspan="1" rowspan="1" style="" align="left" valign="top">
                                    <p>Perception of Objectivity</p>
                                </td>
                                <td colspan="1" rowspan="1" style="" align="left" valign="top">
                                    <p>0.22</p>
                                </td>
                                <td colspan="1" rowspan="1" style="" align="left" valign="top">0.017</td>
                                <td colspan="1" rowspan="1" style="" align="left" valign="top">
                                    <p>Moderate and</p>
                                    <p>significant positive influence</p>
                                </td>
                            </tr>
                        </tbody>
                    </table>
                </table-wrap>
                <p>Algorithm transparency emerged as the most influential factor, with a path coefficient of β = 0.42 (p &lt; 0.001). This suggests that when workers perceive the algorithm’s decision-making logic as clear, accessible, and traceable, their sense of procedural fairness improves substantially. Transparency likely enhances feelings of inclusion and reduces perceived arbitrariness, reinforcing the legitimacy of the incentive system.</p>
                <list list-type="order">
                    <list-item>
                        <p>Explainability, or the system’s perceived ability to provide justifiable reasons behind its decisions, also had a significant positive effect on procedural justice perception (β = 0.31; p = 0.004). This indicates that workers value systems that offer not only accurate outcomes but also meaningful explanations, enabling them to make sense of how and why certain incentive outcomes are assigned. The presence of explainability appears to bridge the communication gap between automated systems and human expectations for accountability.</p>
                    </list-item>
                    <list-item>
                        <p>Perceived objectivity had a moderate but still significant impact on fairness perception (β = 0.22; p = 0.017). This result suggests that while objectivity is important, it is not sufficient on its own to cultivate perceptions of fairness if not accompanied by transparency and clarity. Workers may acknowledge a system’s neutrality but still question its fairness if the process is opaque or uncommunicated.</p>
                    </list-item>
                    <list-item>
                        <p>Collectively, the structural model yielded an R² value of 0.61, meaning that the three independent variables—transparency, explainability, and objectivity—collectively explain 61% of the variance in workers’ perceptions of procedural justice. This is a substantial explanatory power for a psychological and perception-based construct, underscoring the salience of algorithmic features in shaping fairness judgments in automated work environments.</p>
                    </list-item>
                </list>
                <p>These findings highlight that perceptions of fairness in algorithm-driven systems are multidimensional and depend not only on outcome impartiality, but also on the processual clarity and communicability of decision logic. The high explanatory power of the model points to the importance of human-centered algorithm design in ensuring ethical and socially sustainable management practices in modern industrial contexts.</p>
            </sec>
            <sec>
                <title>Qualitative Findings: Psychological Dynamics in the Face of Algorithmic Decisions</title>
                <p>The results of in-depth interviews with 12 informants enriched the understanding of quantitative findings by describing the subjective dimension of workers' experiences in the face of algorithm-based incentive decisions. Three main themes emerged from the thematic analysis process:</p>
                <list list-type="order">
                    <list-item>
                        <p>Lack of Understanding of the Assessment Mechanism</p>
                        <p>Most respondents revealed that they did not understand how the system determines who is eligible to receive incentives. Decisions were perceived as arriving "suddenly," without prior notice or room to ask questions. This creates uncertainty and anxiety about the fairness of the system.</p>
                    </list-item>
                    <list-item>
                        <p>Lack of Dialogue or Explanation from Superiors</p>
                        <p>Some respondents stated that although they felt their performance was good, they were not given incentives because "the numbers in the system are insufficient." They feel that the system is the "sole determinant," offering no room for clarification. The absence of human involvement in explaining the decision creates an emotional distance between workers and management.</p>
                    </list-item>
                    <list-item>
                        <p>Loss of Control and Decreased Trust</p>
                        <p>This theme emerged strongly across most informants. They expressed losing their sense of ownership of the work process because they feel judged by a system they neither understand nor can influence. This condition leads to decreased motivation, the emergence of passive resistance, and distrust of the organization's objectivity.</p>
                    </list-item>
                </list>
            </sec>
            <sec>
                <title>Integration of Findings: Correlation between Systems and Work Psychology</title>
                <p>The integration of quantitative and qualitative results shows a close relationship between algorithmic system design and workers' psychological state. While such systems can improve operational efficiency, the findings suggest that the absence of human involvement in bridging algorithmic decisions can lower perceptions of fairness, ultimately affecting trust and job satisfaction. These findings reinforce the importance of system transparency and managerial intervention in communicating the basis of decisions made by algorithms to workers, so that the process continues to be perceived as fair and acceptable.</p>
            </sec>
        </sec>
        <sec>
            <title>DISCUSSIONS</title>
            <p>The results of this study show that the use of algorithmic systems in determining incentives on the production line has a significant impact on the perception of procedural fairness among workers. These findings confirm that although algorithm-based systems are designed to improve efficiency and objectivity in managerial decision-making, their non-participatory design and implementation can instead create perceptions of injustice, psychological dissonance, and decreased trust in the organization.</p>
            <p>Quantitatively, algorithm transparency proved to be the most influential factor in the perception of procedural fairness. This indicates that workers highly value clarity and openness in the process that determines their incentives. When the working logic of the algorithm cannot be understood or explained, workers feel disconnected from the assessment process that determines their fate. This confirms that in the modern work environment, information disclosure remains a key pillar in building trust, even as technology replaces the role of humans.</p>
            <p>Furthermore, the findings related to the low explainability of algorithms highlight the importance of the communication dimension in the acceptance of managerial decisions. In the context of repetitive, target-based manual work on the production line, workers expect room for clarification or feedback, especially when their effort does not match the rewards received. The absence of explanation gives rise to perceptions of procedural injustice, because workers feel the system is "judging from afar" without understanding their personal context or the real obstacles faced in the field.</p>
            <p>Perceived algorithmic objectivity, although it has a positive effect, is not enough to compensate for weaknesses in transparency and explainability. This suggests that judgments of fairness depend not only on the final outcome or the accuracy of the calculations, but also on how the process is experienced by the individuals involved. In this sense, technical objectivity does not necessarily translate into social justice unless it is accompanied by communication, engagement, and empathy.</p>
            <p>From a qualitative perspective, the in-depth interviews show that many workers experience emotional disconnection from the organization due to the dominance of a system perceived as beyond human reach. They feel a loss of control over the processes that affect their well-being, resulting in passive resistance in the form of decreased work motivation or distrust of management. This phenomenon highlights the importance of the humanistic dimension in the design of organizational technology: algorithms are not merely technical tools, but part of a social system that interacts with individuals who have perceptions, values, and emotions.</p>
            <p>These findings have significant managerial implications. First, while algorithmic systems offer precision and efficiency, managers still need to preserve a human role as mediators and facilitators of communication in the decision-making process. Second, a change management strategy is needed that builds algorithmic literacy among workers, so that they are not merely objects of the system but understand, and are involved in, how it works. Third, transparency and dialogue need to be formalized in the work system, for example by providing channels for clarifying incentive decisions, documenting the reasons behind decisions, or issuing periodic reports on individual performance in the system.</p>
            <p>Theoretically, the results of this study expand our understanding of how procedural justice is reinterpreted in the digital age, where organizational decisions are no longer entirely taken by humans. In the context of algorithmic management, procedural fairness is not only about whether rules are applied consistently, but also about whether the system can be explained, understood, and questioned. Therefore, it is necessary to redefine the principles of justice within the framework of smart technology, so that they remain relevant and adaptive to the dynamics of today's working relationships.</p>
        </sec>
        <sec>
            <title>CONCLUSIONS AND RECOMMENDATIONS</title>
            <p>This study shows that the use of algorithmic systems in incentive decision-making on the production line has a real impact on workers' perceptions of procedural fairness. Although this technology is designed to improve efficiency, consistency, and objectivity, the results reveal that the perceived transparency, explainability, and objectivity of algorithms play an important role in shaping the perception of fairness.</p>
            <p>Quantitative findings show that the level of algorithm transparency has the most significant influence on the perception of procedural fairness. These findings are reinforced by qualitative data revealing feelings of incomprehension and loss of control among workers, due to the absence of a humane explanation for decisions that directly affect their well-being.</p>
            <p>Overall, it can be concluded that the application of algorithmic technologies in incentive systems should be balanced with managerial interventions that focus on human engagement, system literacy at the worker level, and the creation of dialogue spaces. A fully automated approach risks lowering the legitimacy of the managerial system, weakening work motivation, and disrupting healthy industrial relations. Therefore, the principle of procedural justice in the digital era not only demands system efficiency, but also demands the sustainability of fair and participatory social relations.</p>
        </sec>
        <sec>
            <title>ADVANCED RESEARCH</title>
            <p>Building on these findings, future research should explore the longitudinal impact of algorithmic decision-making on worker morale, trust, and organizational commitment across different industrial contexts. Investigating how variations in algorithm design, such as the inclusion of feedback mechanisms, explainable AI features, and differing degrees of human oversight, affect fairness perceptions could provide deeper insights into optimizing sociotechnical integration. Comparative studies across cultures or industries with differing levels of digital maturity may also reveal contextual moderators that influence how algorithmic systems are perceived and accepted by workers. Furthermore, experimental research involving simulated incentive scenarios could help isolate causal mechanisms and inform more adaptive, human-centered algorithmic governance models in high-efficiency work environments.</p>
        </sec>
    </body>
    <back>
    </back>
</article>
