WHEW. WOOOOOOOOOOWIEE. As this unusual yet unique semester comes to an end, I wanted to look back at the things I've learned as a sophomore. It's insane to think that junior year is approaching, and I think the best way to commemorate this school year is to go through the lessons/the things I learned about, especially because of how the method of learning has changed during these challenging times. Also, to the class of 2020, CONGRADS!!! I wish you all the best for your future, and I know you are going to rock it!

So I learned lots of things from all my classes (of course!!), but I think my favorite class by far was the Systems and Signals class. The concepts I learned in my signals class were so applicable to real-life scenarios! For instance, when you send a message to another device on your WiFi, an electronic signal transmits the bits of info (data) to the router, which transmits it to the receiver! Signals are everywhere! So, these were some of the key points I felt were pretty 'special' to me this semester:

1. Frequency Domain vs. Time Domain

Understanding the two different domains in signals and systems is important and very interesting! A time domain graph shows how a particular signal (waveform, sinusoid) changes over time. "Frequency" is essentially how many times a repeating pattern in the signal occurs per unit of time. So, a frequency domain graph shows how much of the signal lies in each frequency band, over a range of frequencies.

2. Lesson on Fourier Transform

This topic was very intriguing, and I wanted to dedicate a separate post to it (coming soon!!).

3. Thinking "Outside of the Box" Problems

There were many problems in my classes that were challenging to solve, but one particular problem that comes to my mind was in my Digital Logic Design class. We were asked to design a grading system using only a certain number of logical components (a 2-by-1 MUX, a 2x4 decoder, and 2 comparators).
Since there was a restriction on how many digital logic components I could use, I couldn't just use as many components as I wanted, wherever I wanted. I needed to understand the function of every component and place each one in the appropriate spot. It was a tough one, since I had to essentially "experiment" with every possible placement combination of these components. I only had two comparators, and I had to decide whether the input number was a grade of A, B, C, or D. It was a tricky one, and I finally had to look at the answer because my brain was literally about to explode lol. But as soon as I saw the solution, I finally had my AHA moment and realized that I could use the MUX as a component to restrict the possible range of the input. Thinking outside the box is all about understanding a component's function and using it where it is most applicable (especially if you don't have that many resources).

4. Real-Life Scenario Connections

Real-life scenarios. Super important, especially in my two EE classes. Digital Logic Design is all about the digital components that exist in our computers today, and we essentially looked into the several smaller components that make up these important parts. It's just insane to learn about what an SRAM or DRAM (new posts, maybe?!) looks like, since we have been hearing these words forever but I never really understood what they meant (or what they looked like) UNTIL NOW!! (no flex)

5. Reflections

I'd like to end it off by saying, it's been a great and CRAZY year. I know it sounds cliché, but sophomore year was a pretty valuable year for me. I actually learned A LOT, and not just about academics/electrical engineering/computer science, but about life. How important it is to be a good citizen, stay home, help others, and help yourself of course. How life can give you the most unexpected turns, and how you should just keep on going, and keep on moving.
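Point 1's time-domain vs. frequency-domain idea can be sketched with a quick FFT in Python. This is just an illustration I'm adding here: the signal, sample rate, and frequencies are made-up numbers, not anything from my actual class.

```python
import numpy as np

# Sample a 1-second signal at 1000 Hz: a 5 Hz sine plus a weaker 50 Hz sine.
fs = 1000
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)

# The FFT takes us from the time domain to the frequency domain.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two largest peaks sit exactly at the two frequencies we mixed in.
peaks = np.sort(freqs[np.argsort(spectrum)[-2:]])
print(peaks)  # the 5 Hz and 50 Hz components
```

In the time domain the two sinusoids are mashed together into one wiggly waveform; in the frequency domain they show up as two clean, separate peaks.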
I hope you enjoyed your year as a freshman/sophomore/junior/senior, and once again thank you for reading! I wish you the best, and GOOD LUCK!
Thank you, Aarushi Ramesh :)
Fourier Series. This was the super annoying memorize-the-formulas topic in my class, but it was actually one of the most interesting math concepts ever. So what is the Fourier series? What's the significance of it? For that we first have to understand what a periodic function is. A periodic function is a function for which there is some T > 0 such that f(x + T) = f(x) for every value of x. The T is the period of f(x). Examples of periodic functions are sin(x) and cos(x), which both have a period of 2π. A Fourier series is essentially a way to expand a periodic function into an infinite series involving a bunch of sines and cosines.

So how do you derive the Fourier series of a periodic function? Let p > 0 and let f(x) be a periodic function with period 2p, defined on the interval (-p, p). The Fourier series of f(x) is:

f(x) = a_0/2 + Σ (n = 1 to ∞) [ a_n cos(nπx/p) + b_n sin(nπx/p) ]

where a_0, a_n, and b_n are the Fourier coefficients:

a_0 = (1/p) ∫ from -p to p of f(x) dx
a_n = (1/p) ∫ from -p to p of f(x) cos(nπx/p) dx
b_n = (1/p) ∫ from -p to p of f(x) sin(nπx/p) dx

ALSO, there are two things to keep in mind: assuming n is an integer, sin(nπ) = 0 and cos(nπ) = (-1)^n.
When you graph a certain number of terms of the Fourier series, you get a close approximation of the original function. [Figure: the left graph shows the original f(x), and the right graph shows the Fourier series approximation.] This series is very similar to a Taylor series, except the Fourier series works with discontinuous functions as well.

Fourier Sine Series

You can obtain the Fourier sine and cosine series from the general formula. For the Fourier sine series, we assume that f(x) is an odd function, which means that f(-x) = -f(x). If that's the case, then the a_0 and a_n terms become zero, because an odd function (f(x)) multiplied by an even function (cos(nπx/p)) is an odd function, and the integral of an odd function from -p to p is 0. [Figure: an example of an odd function over the interval (-p, p); the two areas cancel each other out, which evaluates the integral to 0.] Since a_n and a_0 are both equal to 0, the series reduces to the general Fourier series with just the b_n Fourier coefficients.

Fourier Cosine Series

You obtain the Fourier cosine series when you assume that f(x) is an even function, which means that f(-x) = f(x). Since f(x) is an even function, we know that the b_n terms in the general Fourier series are 0, because an even function (f(x)) times an odd function (sin(nπx/p)) is an odd function, and the integral of an odd function from -p to p is always zero since the areas cancel each other out. Therefore, with only the a_n and a_0 terms left, the Fourier series becomes the Fourier cosine series (with only cosines).
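To make the sine-series idea concrete, here's a small Python sketch I'm adding for illustration: it sums the Fourier sine series of an odd square wave (f(x) = 1 on (0, 1), -1 on (-1, 0), so p = 1). For this function the coefficients work out by hand to b_n = 4/(nπ) for odd n and 0 for even n.

```python
import numpy as np

# Odd square wave with period 2 (p = 1): f(x) = 1 on (0, 1), -1 on (-1, 0).
# Because f is odd, a_0 = a_n = 0 and only the sine terms survive:
#   b_n = 2 * integral from 0 to 1 of sin(n*pi*x) dx
#       = 4 / (n*pi) for odd n, and 0 for even n.

def fourier_sine_partial_sum(x, terms):
    """Partial sum of the Fourier sine series of the square wave."""
    total = np.zeros_like(x, dtype=float)
    for n in range(1, terms + 1):
        b_n = 4 / (n * np.pi) if n % 2 == 1 else 0.0
        total += b_n * np.sin(n * np.pi * x)
    return total

# With 51 terms, the partial sum at x = 0.5 is already very close to
# the square wave's true value of 1 there.
approx = float(fourier_sine_partial_sum(np.array([0.5]), 51)[0])
print(approx)  # close to 1
```

The more terms you include, the closer the approximation gets, except right at the jump discontinuities, where the partial sums always overshoot a little (the Gibbs phenomenon).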
I never actually really understood the significance of eigenvalues and how they're applied visually until my Diff Equations class was over, which is not the best time to figure it out, but at least it'll help me in my future classes, lol. And it's actually pretty cool too!! Honestly, I think the time I got super interested in learning about why eigenvalues exist was when this scene came up in Avengers: Endgame. When Tony Stark mentioned the word "eigenvalue," I was like, "OMG, OMG I actually kinda know this...kinda...that word is very familiar to me so it counts." LOL, yup, this is when I was like hmm, I have to actually understand an eigenvalue's significance and application. So wow, that happened, and I never would have expected an Avengers movie to squeeze in a quick lesson in Diff Equations and Linear Algebra.

Anyways, getting back to the point, what is an eigenvalue? And what are they used for? For example, when you solve an IVP (initial value problem) with matrices, you solve for the eigenvalues first by finding a determinant, then you solve for the corresponding eigenvectors (from the eigenvalues), and then plug the initial values into the general solution to find the values of the constants. That sounds terrible, but it's actually not too bad. It's basically just solving linear equations, except with a matrix you have to find a determinant. That's one way of using eigenvalues. But what are they? And what's their real-world application? Eigenvalues and their corresponding eigenvectors summarize matrix data. Eigenvectors are vectors whose direction is not changed when some linear transformation is applied to the vector. For instance, in the picture, the red vector is an eigenvector since its direction never changes, even after the linear transformation has been applied; the transformation only scales it.
For any n x n square matrix A, an n x 1 vector x is an eigenvector of the matrix if multiplying by A just scales x:

Ax = λx

where x is the eigenvector, A is the matrix, and λ (lambda) is the eigenvalue, which is just a scale factor. Moving everything to one side, you end up with:

(A - λI)x = 0

where I is an identity matrix with the same dimensions as A. Since we are assuming that x is not the null vector, to satisfy this equation, A - λI can't have an inverse (otherwise we could multiply both sides by that inverse and get x = 0). A matrix that is non-invertible has a determinant of 0. Therefore, we can conclude that:

det(A - λI) = 0

And we just use this equation to solve for the eigenvalues of any n x n matrix.
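You can check both the det(A - λI) = 0 recipe and the defining property Ax = λx numerically. Here's a small sketch I'm adding with a made-up 2x2 matrix whose eigenvalues are easy to work out by hand:

```python
import numpy as np

# A small symmetric matrix whose eigenvalues are easy to check by hand:
# det(A - lam*I) = (2 - lam)^2 - 1 = 0  ->  lam = 1 or lam = 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify the defining property A x = lambda x for each eigenpair
# (eigenvectors are stored as the columns of the returned matrix).
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)

print(np.sort(eigenvalues).round(6))  # → [1. 3.]
```

Each eigenvector really does come back pointing the same way it went in, just scaled by its eigenvalue.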
Eigenvalues and eigenvectors of a matrix have tons of real-world applications, such as image compression, clustering in data science, predictions, and the PageRank algorithm.
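As one tiny taste of the PageRank idea: the ranking vector is the dominant eigenvector (eigenvalue 1) of a column-stochastic link matrix, and you can find it by repeatedly multiplying. This is a toy sketch with a made-up three-page web, not the real algorithm (which also adds a damping factor):

```python
import numpy as np

# Toy "web" of 3 pages. Column j holds the probabilities of following a
# link out of page j, so every column sums to 1 (column-stochastic).
links = np.array([
    [0.0, 0.0, 1.0],
    [0.5, 0.0, 0.0],
    [0.5, 1.0, 0.0],
])

# Power iteration: keep multiplying by the link matrix. The vector
# converges to the dominant eigenvector, whose eigenvalue is 1.
rank = np.ones(3) / 3
for _ in range(100):
    rank = links @ rank
    rank /= rank.sum()

# At the fixed point, rank is (approximately) an eigenvector: links @ rank = rank.
assert np.allclose(links @ rank, rank)
print(rank.round(3))  # → [0.4 0.2 0.4]
```

Pages 0 and 2 end up tied for the top rank because more of the link "flow" circulates through them than through page 1.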