Integrating KIT-Based Assessment Frameworks in Doctoral Academic Programs
- Dr. Armando J. Poleo

- Nov 20, 2025
- 6 min read
Şimşit et al. (2014) identified a notable gap in the understanding and application of innovative information technology solutions within higher education programs, particularly at the master's and doctoral levels. A comprehensive literature review revealed compelling examples of the integration of Business Intelligence (BI), Artificial Intelligence (AI), and Knowledge Management (KM) across various industries, including education. Additionally, it underscored the use of assessments to stimulate innovative thinking.

According to Farooq (2018, p. 1), "Organizations can generate competitive advantage by managing social capital through knowledge management processes, which include learning orientation, knowledge sharing, organizational memory, and knowledge reuse." In 2024, Leoni et al. conducted a global study involving 52 large organizations to examine how generative AI is integrated with BI and KM within knowledge management processes to enhance decision-making. The implementation of BI, AI, and KM in evaluating academic programs serves several strategic and operational purposes.
Statement of the Problem
This study examines the limited adoption of innovative technological solutions in IT departments at public universities, focusing on their potential to enhance academic performance in online education (Nielsen & Franco, 2019). There is a growing need for IT professionals to adopt Knowledge Innovation Technology (KIT) methodologies that combine Artificial Intelligence (AI) and Business Intelligence (BI) to enhance operational effectiveness.
These methods can improve efficiency and foster innovative thinking among faculty, students, and administrative staff at public universities across the United States (Setiawan, 2021). Research by Marria (2019) suggests that AI technologies strengthen IT specialists' trust in data processing. Additionally, BI tools enable real-time data analysis and interpretation, empowering administrative staff to promptly address and resolve emerging challenges (Creswell & Creswell, 2018).
Rationale for the Assessment Framework in Doctoral Academic Programs
The integrated KM + AI + BI assessment framework is designed to uphold transparency, ethics, and accountability, with a strong emphasis on equity and inclusion. This framework employs AI to accommodate a variety of writing styles and linguistic backgrounds, uses BI to enhance accessibility for neurodiverse users, and incorporates inclusive pedagogical strategies through KM. These considerations are woven into the assessment criteria, directly supporting equity initiatives.
Doctoral education occupies a critical space at the intersection of intellectual rigor, independent inquiry, and institutional accountability. As doctoral programs evolve to meet the requirements of a data-driven, global environment, the integration of BI, AI, and KM technologies offers transformative advantages, provided they are anchored in a comprehensive, equity-focused assessment framework. Such a framework must address complexity and scalability, support individualized learning paths, manage high-stakes evaluations, and meet the diverse needs of stakeholders—students, advisors, committees, and administrators.
In six years as Chair and Mentor for more than 28 doctoral committees in Business, Education, and Technology, I have yet to encounter a robust assessment system that integrates seamlessly with Learning Management Systems (LMS) such as Blackboard or Canvas. There is a clear need for a solution that merges Knowledge Management (KM), Artificial Intelligence (AI), and Business Intelligence (BI) technologies into a single application, ideally featuring assessment reports and automated rubrics. Embedding such an assessment system in the LMS would significantly improve the evaluation methods currently used in doctoral dissertation processes, yielding a more streamlined and effective paradigm.
BI tools can uncover patterns in dissertation progress, attrition rates, and feedback quality. AI offers personalized support, forecasts risks, and refines formative assessments, while KM systems safeguard institutional knowledge, record best practices, and foster interdisciplinary collaboration. Without a unified framework, these tools risk inconsistent application, subjective judgment, and a lack of focus on equity and transparency.
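As a concrete illustration of the pattern-finding described above, the sketch below (Python with pandas) aggregates a hypothetical export of dissertation records into program-level indicators such as milestone completion, attrition, and feedback turnaround. The file name and column names are assumptions for illustration only, not a reference to any particular LMS or data warehouse.

```python
# Minimal BI-style aggregation sketch (assumed CSV schema, not a real LMS export).
import pandas as pd

# Hypothetical export: one row per doctoral student.
# Columns assumed: program, milestones_completed, milestones_total,
# withdrawn (0/1), feedback_turnaround_days
records = pd.read_csv("dissertation_records.csv")

records["milestone_pct"] = (
    records["milestones_completed"] / records["milestones_total"] * 100
)

summary = records.groupby("program").agg(
    students=("program", "size"),
    avg_milestone_pct=("milestone_pct", "mean"),
    attrition_rate=("withdrawn", "mean"),
    median_feedback_days=("feedback_turnaround_days", "median"),
)

# A dashboard layer (e.g., Power BI, Tableau, or a notebook) would visualize this table.
print(summary.sort_values("attrition_rate", ascending=False))
```

Programs that surface with high attrition or slow feedback turnaround become natural candidates for the targeted interventions discussed in the sections that follow.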
Ultimately, the assessment framework enables academic directors, faculty, system administrators, and instructional designers to make strategic, informed decisions about technology adoption, thereby strengthening institutional capacity for innovation. It evaluates how technological solutions can enhance the efficiency and effectiveness of IT professionals in achieving organizational goals. KM+AI+BI solutions are crafted to support innovative strategies for improving academic performance in both traditional and online education settings (Nielsen & Franco, 2019).
Intended Uses of the Assessment Results
Program-Level Decision Making
· Curriculum Enhancement: Utilize business intelligence tools to identify shortcomings in dissertation progress, feedback cycles, and student engagement metrics.
· Resource Allocation: Redirect resources such as writing centers and mentorship programs to areas identified by AI analytics as high-risk or underperforming.
· Policy Development: Formulate ethical guidelines for AI usage, KM access, and data transparency, informed by insights from the analytical framework.
Instructional Design & Pedagogy
· Feedback Calibration: Align faculty feedback with AI-generated formative assessments to ensure consistent and meaningful evaluation.
· Rubric Revision: Leverage performance data from assessment rubrics to refine evaluation criteria, enhancing clarity, equity, and rigor.
· Adaptive Learning Pathways: Personalize dissertation milestones and support systems using predictive analytics and KM engagement data.
Equity & Inclusion Monitoring
· Bias Identification: Examine AI-generated feedback across demographic groups to detect and address potential biases (see the sketch after this list).
· Accessibility Audits: Analyze KM system logs and BI dashboards to ensure all students have equitable access to resources.
· Trauma-Informed Adjustments: Track trends in disengagement or feedback fatigue and modify instruction to support affected learners better.
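To make the bias-identification item above concrete, the following sketch compares AI-generated feedback scores across self-reported demographic groups and runs a first-pass statistical check. The column names and score scale are assumptions for illustration; a real audit would also control for cohort, discipline, and draft stage, and would pair any statistical signal with qualitative review.

```python
# Minimal bias-audit sketch: compare AI feedback scores across groups (assumed columns).
import pandas as pd
from scipy import stats

feedback = pd.read_csv("ai_feedback_log.csv")  # hypothetical export
# Columns assumed: student_id, demographic_group, ai_score (0-100)

by_group = feedback.groupby("demographic_group")["ai_score"].agg(["count", "mean", "std"])
print(by_group)

# One-way ANOVA as a first-pass check for mean differences between groups;
# follow up with effect sizes and human review before drawing conclusions.
groups = [g["ai_score"].values for _, g in feedback.groupby("demographic_group")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA F={f_stat:.2f}, p={p_value:.4f}")
```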
Faculty Utilization of the Assessment Framework
Critical Reflection
Faculty members should thoroughly analyze BI dashboards to gain insights into student progress throughout the dissertation process. By comparing rubric scores across cohorts, they can identify patterns in academic performance and assess the effectiveness of feedback.
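One way to operationalize this cohort comparison, assuming the LMS can export rubric scores tagged with cohort and criterion, is a simple pivot of mean scores by cohort and criterion; the schema below is illustrative, not a standard export format.

```python
# Sketch: compare rubric scores across cohorts (assumed rubric export schema).
import pandas as pd

scores = pd.read_csv("rubric_scores.csv")
# Columns assumed: cohort, criterion (e.g., "Methodology", "Literature Review"), score (1-4)

pivot = scores.pivot_table(
    index="cohort",
    columns="criterion",
    values="score",
    aggfunc="mean",
)
# Criteria with consistently low means across cohorts flag rubric or feedback issues.
print(pivot.round(2))
```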
Instructional Modification
Incorporate AI-generated feedback as a supplemental formative assessment, especially for early drafts and iterative development. Adjust instructional approaches using KM analytics, focusing on metrics that reveal which resources are most used or overlooked.
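A supplemental AI feedback pass can be as lightweight as sending a draft excerpt and the rubric criteria to a language model. The sketch below assumes the OpenAI Python SDK and a placeholder model name; any comparable provider or an institutionally hosted model would serve equally well, and the rubric text here is a stand-in rather than an institutional standard.

```python
# Sketch: AI-generated formative feedback on a draft excerpt.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY in the environment;
# the model name and rubric wording are placeholders.
from openai import OpenAI

client = OpenAI()

rubric = "Criteria: clarity of argument, alignment with research questions, APA citation accuracy."
draft_excerpt = "..."  # excerpt of the student's chapter draft

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        {"role": "system", "content": "You are a doctoral writing mentor giving formative, "
                                      "non-evaluative feedback keyed to the rubric provided."},
        {"role": "user", "content": f"Rubric: {rubric}\n\nDraft excerpt:\n{draft_excerpt}"},
    ],
)
print(response.choices[0].message.content)  # advisory only; faculty retain final judgment
```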
Guidance Improvement
Use predictive analytics to pinpoint students who may require additional support. Share exemplary dissertations, committee feedback, and annotated rubric guidelines via KM repositories to enhance mentorship practices.
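The predictive-analytics step can start small: a logistic regression over a handful of engagement and progress features, trained on prior cohorts, to flag current students who may benefit from outreach. The feature names and files below are assumptions for illustration, and any such model should be audited for bias (see the equity monitoring section above) before it informs interventions.

```python
# Sketch: flag students who may need additional support (assumed feature schema).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = pd.read_csv("student_progress.csv")  # hypothetical historical export
features = ["days_since_last_submission", "milestone_pct",
            "feedback_response_days", "lms_logins_30d"]
X, y = data[features], data["needed_intervention"]  # label assumed from past advisor records

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")

# Flag current students whose predicted risk exceeds a review threshold (0.6 is arbitrary).
current = pd.read_csv("current_students.csv")
current["risk"] = model.predict_proba(current[features])[:, 1]
print(current.loc[current["risk"] > 0.6, ["student_id", "risk"]])
```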
Reporting and Documentation
Export BI visualizations for use in accreditation documentation, program evaluations, or committee assessments. Utilize assessment findings to support pedagogical innovation and substantiate requests for more resources.
Collaborative Enhancement
Participate in faculty learning communities to exchange insights gained from the framework. Collaborate to develop new rubrics or feedback models, applying collective data analysis and AI insights to foster continuous improvement.
Conclusions and Recommendations
Conclusions
The integration of Knowledge Management (KM), Artificial Intelligence (AI), and Business Intelligence (BI) within doctoral academic programs marks a transformative shift in assessment and institutional innovation. By embedding these technologies into a unified framework, universities can overcome fragmented evaluation practices, promoting transparency, equity, and accountability.
This framework not only increases operational efficiency but also supports personalized learning, predictive analytics, and inclusive pedagogical strategies. Adopting a KIT-based assessment model empowers faculty, administrators, and students to make data-driven decisions, encouraging academic excellence and resilience in an increasingly digital and global educational environment.
Actionable Recommendations for Implementation
· Develop a Unified Platform: Integrate KM, AI, and BI functionalities into existing Learning Management Systems (LMS) such as Blackboard or Canvas. Ensure the system complies with institutional data governance policies and accreditation standards.
· Pilot the Framework in Select Programs: Launch the framework with a small cohort of doctoral students to test predictive analytics, automated rubrics, and BI dashboards. Gather feedback from faculty and students to enhance usability and equity features.
· Create Ethical and Equity Guidelines: Set clear policies regarding AI-driven feedback, data privacy, and bias monitoring. Include accessibility audits to support neurodiverse and multilingual learners.
· Train Faculty and Staff: Offer workshops on interpreting BI dashboards, using AI for formative assessments, and leveraging KM repositories for mentorship. Encourage faculty learning communities to share best practices and insights.
· Embed Continuous Improvement Mechanisms: Use BI analytics to monitor dissertation progress, attrition rates, and feedback quality. Regularly update rubrics and assessment criteria based on data trends and stakeholder input.
· Secure Funding and Resources: Advocate for institutional investment in KIT-based technologies by presenting projected improvements in efficiency, equity, and student success. Seek partnerships with ed-tech vendors for scalable solutions.
References
Creswell, J. W., & Creswell, J. D. (2023). Research design: Qualitative, quantitative, and mixed methods approaches. Sage.
Leonidou, E., Christofi, M., Vrontis, D., & Thrassou, A. (2020). An Integrative Framework of Stakeholder Engagement for Innovation Management and Entrepreneurship Development. Journal of Business Research. https://doi.org/10.1016/j.jbusres.2018.11.054
Marria, V. (2019). The future of artificial intelligence in the workplace. Forbes. https://www.forbes.com/sites/vishalmarria/2019/01/11/the-future-of-artificial-intelligence-in-the-workplace/
Nielsen, K., & Franco, J. (2019). She is going Mobile: Promoting Faculty and Student Success with Tech Development. Journal of Faculty Development, 33(2), 49+. https://link.gale.com/apps/doc/A625235853/AONE?u=pres1571&sid=ebsco&xid=dfb3d7ee
Farooq, R. (2018). A conceptual model of knowledge sharing. International Journal of Innovation Science, 10(2), 238–260. https://doi.org/10.1108/IJIS-09-2017-0087
Şimşit, Z. T., Vayvay, Ö., & Öztürk, Ö. (2014). An outline of the innovation management process: Building a framework for managers to implement innovation. Procedia - Social and Behavioral Sciences, 150, 690–699.




