Quick Intro
In this tutorial, we introduce how MAIC works in practice. To cater to various user roles and system functionalities, this tutorial is structured into four distinct levels, each delving deeper into the system’s implementation details.
Users
Since manuals embedded in documentation may be less friendly to most users than an online document, we provide user manuals in the form of FeiShu documents.
Teacher: How To Convert Course Contents Into MAIC Classroom
In the current release, MAIC is only available for specific courses. To request the manual, please email the MAIC management team.
Student: How To Take a Lecture Using MAIC
A user manual is available via a clickable link within the MAIC website.
Researcher and Developers
To better adapt to the needs of different kinds of researchers and developers, this section is organized around two major focuses:
AI, LLM and Agent Researchers: We introduce how the flow of LLM/Agent-related algorithms works within the system and how we connect the teacher-end and student-end methods via a unified data structure, denoted as Agenda. For those interested in implementing their own classroom with customized Functions/Algorithms, we also provide a simple, easy-to-use guide to help such researchers find the code they are looking for.

Edu+AI and Cognitive Scientists: We are aware that the data observed in our system could motivate studies by cognitive and Edu+AI researchers. Thus, on top of releasing the code for launching a customizable MAIC, we also release the Classroom-Replay and Agent Debugging Tool that we implemented for our feedback team and for the education/cognitive scientists on our team.
AI, LLM and Agent Researchers
How Do The Algorithms Operate?
In this subsection, we discuss which algorithms are included and how they work together to deliver MAIC’s core functionality.
Before diving into the details, it might be helpful to take a look at the design philosophy of the teacher-end and student-end frameworks as a whole.
| | Teacher | Student |
|---|---|---|
| Goal | To help teachers create AI courses without too much human effort | To allow students to customize their classroom |
| Priority | Safety > Personalization > Latency > Precision | |
| Latency | Offline | Realtime |
| Complexity | High | Low |
| Input | Raw Teaching Material | Teaching Intention + History Activities |
| Output | Teaching Intention | Response: Activity + Agent |
| System Load | Light | Heavy |
| User’s Focus | Entire Lecture’s Flow Control | Chat-Round Level Performance |
In an online system, it is expensive to have engineers, particularly AI scientists, process and update the extracted Teaching Intention directly for students. To address this, we’ve designed an intermediate data structure called “Agenda.” This structure acts as the output of the Teacher’s side and the input to the Student’s side, guiding the flow and structure of the classroom effectively. For a deeper understanding of the Agenda’s role and structure, please refer to our detailed introduction on the Agenda.
In short, the Agenda takes the form of a tree in which each node contains several lecturing activities to be carried out during class. We denote such activities as Functions.
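To make the shape of this structure concrete, here is a minimal sketch of an Agenda tree in Python. The class names (`AgendaNode`, `Function`) and the activity names in the example are illustrative assumptions only, not the actual MAIC schema.

```python
# A hypothetical sketch of an Agenda tree; field and activity names are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Function:
    """A single lecturing activity to be carried out during class."""
    name: str                                   # e.g. a slide display or a narration step
    payload: Dict = field(default_factory=dict) # activity-specific parameters


@dataclass
class AgendaNode:
    """One node of the Agenda tree: a lecture section with its activities and sub-sections."""
    title: str
    functions: List[Function] = field(default_factory=list)
    children: List["AgendaNode"] = field(default_factory=list)


# Example: a tiny lecture with one chapter containing two activities.
agenda = AgendaNode(
    title="Lecture 1: Introduction",
    children=[
        AgendaNode(
            title="Chapter 1.1: Course Overview",
            functions=[
                Function("ShowSlide", {"slide": 1}),
                Function("Narrate", {"text": "Welcome to the course."}),
            ],
        )
    ],
)
```

Under this view, the teacher-end pipeline produces such a tree offline from raw teaching material, while the student-end pipeline walks it in real time, executing each Function in turn.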
Where To Find The Code I Need?
We maintain our latest released code and provide an introduction to the framework of the system.
Our current project is assembled as service workers to support easy scaling. This is helpful because it allows us to align our algorithms’ storage units (e.g. MongoDB collections) with the organization of our code repositories, potentially helping developers navigate the corresponding data collections faster.
Current service workers are divided into three categories:
LLM Workers. Workers that call and handle the requests posted to the LLMs.
PreClass Workers. Workers that are responsible for generating a lecture according to the given pptx seed file.
InClass Workers. Workers that provide in-class interactions.
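As a rough illustration of this division of labor, the sketch below shows a task-queue-style dispatch among workers. The task types, handler names, and payload fields are all hypothetical; they only demonstrate the pattern, not the actual MAIC codebase.

```python
# A hypothetical sketch of worker dispatch; task types and handlers are assumptions.
from typing import Callable, Dict

# Registry mapping task types to handler functions.
HANDLERS: Dict[str, Callable[[dict], dict]] = {}


def handler(task_type: str):
    """Register a function as the handler for a given task type."""
    def decorator(fn: Callable[[dict], dict]):
        HANDLERS[task_type] = fn
        return fn
    return decorator


@handler("llm.completion")
def handle_llm_completion(task: dict) -> dict:
    # An LLM Worker would forward the prompt to the model backend here.
    return {"status": "ok", "echo": task.get("prompt", "")}


@handler("preclass.generate_lecture")
def handle_generate_lecture(task: dict) -> dict:
    # A PreClass Worker would parse the pptx seed file and build an Agenda here.
    return {"status": "ok", "agenda_root": task.get("pptx_path", "")}


def dispatch(task: dict) -> dict:
    """Route a task to the worker handler registered for its type."""
    fn = HANDLERS.get(task["type"])
    if fn is None:
        return {"status": "error", "reason": f"no handler for {task['type']}"}
    return fn(task)


if __name__ == "__main__":
    print(dispatch({"type": "llm.completion", "prompt": "Summarize slide 3"}))
```

Grouping handlers by category this way mirrors how the service workers (and their associated data collections) are organized, which is what makes it easier to locate the code and data relevant to a given stage of the pipeline.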
Edu+AI and Cognitive Scientists
We are currently making our best effort to curate our data, aiming to make the data produced by our system available to the broader research community. It will be released once we are confident that it meets sufficient ethical and quality standards.