Artificial Intelligence (AI) is increasingly integrated into higher education for teaching, learning, and assessment. Nevertheless, faculty adoption of AI technologies remains uneven, shaped by cultural, ethical, and identity-related concerns specific to universities. Established theories such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) have proven highly useful for explaining technology adoption, but they fall short in capturing socio-cultural dynamics such as academic freedom, the perceived threat to expertise, and institutional self-governance. In this study, we propose an extended TAM/UTAUT framework to explain faculty adoption of AI tools in higher education, conceptualizing new moderating constructs alongside policy-level influences. A mixed-method design is proposed: qualitative interviews to develop the constructs, followed by a large-scale survey to test them using structural equation modeling (SEM). The framework's central propositions are that academic freedom positively moderates the effect of perceived usefulness, while perceived threat to expertise negatively moderates behavioral intention to adopt AI; ethics mediates the path from trust and institutional support to adoption. The study thereby extends acceptance theory to AI, offering both theoretical and practical contributions that can inform institutional AI adoption strategies.
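The moderation hypothesis can be illustrated with a minimal simulated sketch: behavioral intention (BI) is regressed on perceived usefulness (PU), the moderator academic freedom (AF), and their interaction term, where a positive interaction coefficient corresponds to AF strengthening the PU-to-BI path. All variable names, effect sizes, and data here are hypothetical; the proposed study would test such effects with SEM on survey data, not a simple OLS fit.

```python
import numpy as np

# Simulate hypothetical survey-scale data with a known positive
# moderation effect of AF on the PU -> BI relationship (0.30).
rng = np.random.default_rng(0)
n = 2000
PU = rng.normal(size=n)                      # perceived usefulness
AF = rng.normal(size=n)                      # academic freedom (moderator)
BI = 0.50 * PU + 0.20 * AF + 0.30 * PU * AF + rng.normal(scale=0.5, size=n)

# Design matrix: intercept, PU, AF, and the PU x AF interaction term.
X = np.column_stack([np.ones(n), PU, AF, PU * AF])
beta, *_ = np.linalg.lstsq(X, BI, rcond=None)
intercept, b_pu, b_af, b_inter = beta

# A recovered b_inter close to 0.30 illustrates a positive moderation
# effect: PU's influence on BI grows as AF increases.
print(f"PU effect: {b_pu:.2f}, moderation (PU x AF): {b_inter:.2f}")
```

In a full SEM analysis the interaction would instead be specified between latent constructs (e.g. via product indicators), but the interpretation of a positive moderating coefficient is the same.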