AI-University: An LLM-based platform for instructional alignment to scientific classrooms
Mostafa Faghih Shojaei, Rahul Gulati, Benjamin A. Jasperson, Shangshang Wang, Simone Cimolato, Dangli Cao, Willie Neiswanger, Krishna Garikipati
We introduce AI University (AI-U), a flexible framework for AI-driven course content delivery that adapts to instructors' teaching styles. At its core, AI-U fine-tunes a large language model (LLM) with retrieval-augmented generation (RAG) to generate instructor-aligned responses from lecture videos, notes, and textbooks. Using a graduate-level finite-element-method (FEM) course as a case study, we present a scalable pipeline to systematically construct training data, fine-tune an open-source LLM with Low-Rank Adaptation (LoRA), and optimize its responses through RAG-based synthesis. Our evaluation, combining cosine similarity, LLM-based assessment, and expert review, demonstrates strong alignment with course materials. We have also developed a prototype web application, available at https://my-ai-university.com, that enhances traceability by linking AI-generated responses to specific sections of the relevant course material and to time-stamped instances of the open-access video lectures. Our expert model achieves greater cosine similarity with a reference answer on 86% of test cases, and an LLM judge found it to outperform the base Llama 3.2 model in approximately four out of five comparisons. AI-U offers a scalable approach to AI-assisted education, paving the way for broader adoption in higher education. Here, our framework is presented in the setting of a class on FEM, a subject central to training PhD and Master's students in engineering science. This setting, however, is a particular instance of a broader context: fine-tuning LLMs to research content in science.
Covariance matrix - Wikipedia
In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector. Any covariance matrix is symmetric and positive semi-definite, and its main diagonal contains variances (i.e., the covariance of each element with itself). As an example, the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the \(x\) and \(y\) directions contain all of the necessary information; a \(2 \times 2\) matrix would be necessary to fully characterize the two-dimensional variation. Some statisticians, following the probabilist William Feller in his two-volume book An Introduction to Probability Theory and Its Applications,[2] call the matrix \(\operatorname{K}_{\mathbf{X}\mathbf{X}}\) the variance of the random vector \(\mathbf{X}\), because it is the natural generalization to higher dimensions of the 1-dimensional variance. Others call it the covariance matrix, because it is the matrix of covariances between the scalar components of the vector \(\mathbf{X}\).
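The two-dimensional example above can be sketched in NumPy. This is a minimal illustration with made-up data: the mean and covariance passed to the sampler are arbitrary, and `np.cov` is used to estimate the \(2 \times 2\) matrix from the sampled points.

```python
import numpy as np

# Hypothetical 2-D point cloud: 500 samples of a random vector X = (x, y).
rng = np.random.default_rng(0)
points = rng.multivariate_normal(mean=[0.0, 0.0],
                                 cov=[[2.0, 0.8], [0.8, 1.0]],
                                 size=500)

# np.cov expects variables in rows, observations in columns.
K = np.cov(points.T)

# The covariance matrix is symmetric and positive semi-definite,
# with the variances of x and y on its main diagonal.
assert np.allclose(K, K.T)
assert np.all(np.linalg.eigvalsh(K) >= 0)
print(K)
```

The single diagonal entries alone would miss the off-diagonal covariance, which is exactly why a full matrix is needed to characterize two-dimensional variation.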
CamCal 011 Fundamental Matrix - Master Data Science
Highlights: In this post we will learn about the fundamental matrix as we continue our series on stereo vision. In the last post we concluded that, given enough point correspondences, we should be able to recover the constraint defining the epipolar line; for this, we need to compute a fundamental matrix. In previous posts we developed the relationship between two images obtained with two calibrated cameras, where the rotation and translation parameters between them are known. In particular, we defined the essential matrix, which relates corresponding world points between two calibrated cameras.
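The epipolar constraint mentioned above can be checked numerically. The sketch below uses made-up numbers for a hypothetical calibrated stereo pair (identity intrinsics, so the essential matrix \(E = [t]_\times R\) plays the role of the fundamental matrix): any pair of corresponding image points satisfies \(x_2^\top E\, x_1 = 0\).

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Hypothetical stereo rig: camera 2 is rotated 10 degrees about the y-axis
# and translated along x relative to camera 1.
theta = np.deg2rad(10)
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([1.0, 0.0, 0.0])
E = skew(t) @ R  # essential matrix

# A world point seen in both cameras (normalized homogeneous coordinates).
X = np.array([0.5, -0.2, 4.0])
x1 = X / X[2]
x2_cam = R @ X + t
x2 = x2_cam / x2_cam[2]

# Epipolar constraint: x2^T E x1 = 0 for any corresponding pair.
print(abs(x2 @ E @ x1))  # ~0 up to floating-point error
```

When the cameras are uncalibrated, the same constraint holds with the fundamental matrix in place of \(E\), which is what the post goes on to estimate.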
CamCal 007 Camera Calibration - Master Data Science
Highlights: In this post, we will explain the main idea behind camera calibration. We will do this by going through code, which will be explained in detail. In the last few posts, we talked about modeling a projection, perspective projection, and camera translation and rotation; all of this will be of great importance for understanding this post. So, if you missed that, jump back and prepare for programming. The main idea of camera calibration is to find the parameters that will help us solve certain problems, which is what we will be doing here.
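One concrete way to "find the parameters" is the Direct Linear Transform: given 3-D points and their projections, recover the \(3 \times 4\) projection matrix by solving a homogeneous linear system. The sketch below uses a made-up ground-truth camera (`K`, `Rt` and all point values are arbitrary) to generate exact correspondences and then recovers the matrix via SVD; it is an illustration of the idea, not the post's own code.

```python
import numpy as np

# A hypothetical ground-truth camera: intrinsics K and pose [R | t].
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
Rt = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [2.0]])])
P_true = K @ Rt

# Synthetic 3-D calibration points and their projected pixel coordinates.
rng = np.random.default_rng(1)
Xw = rng.uniform(-1, 1, size=(12, 3))
Xh = np.hstack([Xw, np.ones((12, 1))])
proj = (P_true @ Xh.T).T
uv = proj[:, :2] / proj[:, 2:3]

# Direct Linear Transform: each correspondence gives two linear
# equations in the 12 entries of P; solve A p = 0 by SVD.
A = []
for Xx, u in zip(Xh, uv):
    A.append(np.concatenate([Xx, np.zeros(4), -u[0] * Xx]))
    A.append(np.concatenate([np.zeros(4), Xx, -u[1] * Xx]))
A = np.array(A)
P_est = np.linalg.svd(A)[2][-1].reshape(3, 4)

P_est /= P_est[-1, -1]          # fix the overall scale and sign
P_ref = P_true / P_true[-1, -1]
print(np.max(np.abs(P_est - P_ref)))  # small residual
```

With noisy real measurements one would use many more points and a least-squares or nonlinear refinement step, but the linear system is the core of the calibration idea.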
Understanding Linear Regression
Linear regression is a regression model that outputs a numeric value. It is used to predict an outcome from a linear combination of inputs. As you can guess, this function represents a line in the coordinate system. The hypothesis function \(h_{\theta}\) approximates the output for a given input. A linear regression model can represent either a univariate or a multivariate problem.
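A minimal univariate sketch of this, with made-up data: the hypothesis \(h_{\theta}(x) = \theta_0 + \theta_1 x\) is fit by least squares and then used to predict a numeric output for a new input.

```python
import numpy as np

# Made-up training data, roughly following y = 2x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

# Design matrix with an intercept column; solve for theta by least squares.
X = np.column_stack([np.ones_like(x), x])
theta, *_ = np.linalg.lstsq(X, y, rcond=None)

def h(x_new):
    """Hypothesis function: predicted numeric output for a new input."""
    return theta[0] + theta[1] * x_new

print(theta, h(6.0))
```

A multivariate problem works the same way: each extra input feature becomes another column of the design matrix and another entry of `theta`.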
Linear Algebra: Linear combination of Vectors - Master Data Science
Highlights: In this post we are going to continue our story about vectors. We will talk more about basis vectors, linear combinations of vectors, and the span of a set of vectors. We provide code examples to demonstrate how to work with vectors in Python. Let's talk about vectors in more detail. Vectors can be identified with pairs of numbers that we call coordinates.
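The ideas above can be sketched in a few lines of NumPy. All the vectors here are made-up examples: a linear combination of the standard basis, and a check that a target vector lies in the span of two hypothetical basis vectors `b1` and `b2` by solving for the combination weights.

```python
import numpy as np

# Standard basis vectors and a linear combination 3*i + 2*j.
i_hat = np.array([1.0, 0.0])
j_hat = np.array([0.0, 1.0])
v = 3 * i_hat + 2 * j_hat
print(v)  # [3. 2.]

# Any 2-D vector lies in the span of two linearly independent vectors:
# solve a*b1 + b*b2 = target for the scalar weights (a, b).
b1 = np.array([2.0, 1.0])
b2 = np.array([-1.0, 1.0])
target = np.array([1.0, 5.0])
weights = np.linalg.solve(np.column_stack([b1, b2]), target)
print(weights)  # the combination weights a, b
```

If `b1` and `b2` were parallel, the matrix would be singular and the solve would fail, which is exactly the case where the span collapses to a line.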
CamCal #000 Perspective Imaging - Master Data Science
Highlights: In this post we're going to talk about perspective imaging. First, there is a little bit of math needed to explain the geometry and configuration of the camera. Second, we will use a simplified pinhole camera model; hence, we will not talk about focus and other "non-pinhole effects" that arise when the rays are not in focus. When we take a photo, our 3D world is mapped onto a 2D image.
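The pinhole mapping itself is just a perspective divide. The sketch below uses made-up point coordinates and a hypothetical focal length `f` to show the defining property: a 3-D point \((X, Y, Z)\) lands on the image plane at \((fX/Z,\; fY/Z)\), so doubling the depth halves the projected size.

```python
# Pinhole projection: a 3-D point (X, Y, Z) maps to the image plane at
# (f * X / Z, f * Y / Z), the standard perspective divide.
def project(point, f=1.0):
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)

# The same offset twice as far away projects half as large,
# which is the hallmark of perspective imaging.
near = project((1.0, 0.5, 2.0))
far = project((1.0, 0.5, 4.0))
print(near, far)  # (0.5, 0.25) (0.25, 0.125)
```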
Calculus for Machine Learning (7-day mini-course)
Calculus is an important mathematical technique behind many machine learning algorithms. You don't always need to know it to use the algorithms, but when you go deeper you will see it is ubiquitous in every discussion of the theory behind a machine learning model. As practitioners, we are unlikely to encounter very hard calculus problems; if we need to solve one, there are tools such as computer algebra systems to help, or at least to verify our solution. What is more important is understanding the ideas behind calculus and relating calculus terms to their use in machine learning algorithms.
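As a small illustration of "verifying our solution" without a full computer algebra system, the sketch below checks a hand-computed derivative numerically with a central finite difference; the function `f` and the evaluation point are made-up examples.

```python
import math

# Hand-computed derivative of f(x) = x^2 * sin(x) via the product rule:
# f'(x) = 2x*sin(x) + x^2*cos(x). Verify it with a central difference.
f = lambda x: x**2 * math.sin(x)
df = lambda x: 2 * x * math.sin(x) + x**2 * math.cos(x)

def central_diff(g, x, h=1e-6):
    """Numerical derivative: (g(x+h) - g(x-h)) / (2h)."""
    return (g(x + h) - g(x - h)) / (2 * h)

x = 1.3
print(abs(df(x) - central_diff(f, x)))  # tiny: the two agree
```

This is the same pattern gradient-checking uses to validate backpropagation code in machine learning libraries.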
#007 Linear Algebra - Change of basis - Master Data Science
In the following image we can see an alternative basis for the coordinate system, given by the basis vectors \(\vec{b}_{1} \) and \(\vec{b}_{2} \). In this alternative coordinate system, the vector is represented by the coordinates \((-1, 2)\): \(-1 \) because that is how much we have to scale \(\vec{b}_{1} \), and \(2 \) because that is how much we have to scale \(\vec{b}_{2} \).
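The change of basis can be sketched numerically. The basis vectors below are made-up values standing in for the post's \(\vec{b}_{1}\) and \(\vec{b}_{2}\): stacking them as columns gives the change-of-basis matrix, which converts b-coordinates to standard coordinates, and its inverse converts back.

```python
import numpy as np

# Hypothetical alternative basis vectors (expressed in standard coordinates).
b1 = np.array([2.0, 1.0])
b2 = np.array([-1.0, 1.0])
B = np.column_stack([b1, b2])   # change-of-basis matrix

# Coordinates (-1, 2) in the b-basis mean -1*b1 + 2*b2 in standard coords.
coords_b = np.array([-1.0, 2.0])
v_standard = B @ coords_b
print(v_standard)

# Going the other way: standard coords -> b-coords via the inverse of B.
back = np.linalg.solve(B, v_standard)
print(back)  # recovers (-1, 2)
```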