
Posts

Showing posts from November, 2010

Filter design (2)

Input actual data. Let's feed some signal into the filter that we designed last time. Table 1 shows the output for a constant input signal. A constant signal may be boring, but we start with a simple one. Table 1: Constant input. To make sure, I will show how to compute Table 1's red output, y_n at n=1. In this case, the filter gets the first three inputs. The inputs are all the same (= 1), therefore all the outputs are also the same. Let's compute the transfer function. We compute the transfer function all the time in digital filtering. What, Jojo, you! Be surprised. (It's a bit old; does anybody know JoJo's Bizarre Adventure by Hirohiko Araki?) The ratio of output to input is equal to the transfer function! Here is a small detail. In the table, y_n has a value at n = 0, 8 since we can compute the cos value there. But there is usually no n = -1 value in real data acquisition (if we start with n = 0, there is no data at n = -1). Therefore, it is also possible to say there i…
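The computation above can be sketched in a few lines. This is a minimal sketch assuming a symmetric three-sample non-recursive filter y_n = a*x_{n-1} + b*x_n + a*x_{n+1}; the coefficient values below are assumptions for illustration, not necessarily the ones in the post's Table 1.

```python
# Feed a constant signal into an assumed symmetric three-sample filter
#   y_n = a*x_{n-1} + b*x_n + a*x_{n+1}
# The coefficients are illustrative; 2a + b = 1 so a constant passes unchanged.
a, b = 0.25, 0.5
x = [1.0] * 10            # constant input signal

# output is defined where all three samples exist (n = 1 .. len(x)-2)
y = [a * x[n - 1] + b * x[n] + a * x[n + 1] for n in range(1, len(x) - 1)]
print(y[0])               # (2a + b) * 1 = 1.0, and the same for every n
```

Since every input sample is 1, every output is (2a + b) * 1, and the output/input ratio is the transfer function evaluated at frequency zero.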

Filter design (1)

This is about filter design from Hamming's digital filters book. Chapter 3.9 is outstanding, so I would like to make a note of it. A while ago there was a soccer tournament, and how to suppress the vuvuzela sound (while letting other sounds pass) was a popular topic at that time. In that case, you analyze the frequency of the vuvuzela sound and design a filter which doesn't pass that frequency. Digital filter design is about deciding which frequencies of the signal will pass and which will not. See the following article for instance: http://blogs.mathworks.com/loren/2010/06/30/vuvuzela-denoising-with-parametric-equalizers/ In chapter 3.9, a simple non-recursive filter design is described as an example. The filter looks like the following. This filter uses three samples. Designing the filter means determining a and b to fit your needs. First, we will calculate the transfer function of this filter. As I explained in a former blog post, a transfer function is an eigen…
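As a sketch of what "which frequency passes" means for such a three-sample filter: feeding e^{iωn} through the assumed form y_n = a*x_{n-1} + b*x_n + a*x_{n+1} gives the transfer function H(ω) = a e^{-iω} + b + a e^{iω} = b + 2a cos ω. The coefficient values are again assumptions for illustration.

```python
import cmath
import math

def transfer(omega, a=0.25, b=0.5):
    # H(omega) for the assumed filter y_n = a*x_{n-1} + b*x_n + a*x_{n+1};
    # algebraically this equals b + 2a*cos(omega)
    return a * cmath.exp(-1j * omega) + b + a * cmath.exp(1j * omega)

print(transfer(0.0).real)        # 1.0 : a constant signal passes unchanged
print(round(transfer(math.pi).real, 12))   # 0.0 : the highest frequency is blocked
```

With these (assumed) coefficients the filter is a low-pass: it keeps slow variations and suppresses the fastest alternation.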

Introduction to Linear Algebra: Projection and a Simple Kalman Filter (3)

Simple Kalman filter. In a sense, a Kalman filter can predict the near future from past and current conditions. (Maybe this is a bit too much to say; of course there are limitations.) Let's write it as an equation: x_{new} = x_{old} + f(x_{current}). This means that the future is somewhat an extension of the previous state with a modification, and the modification is hidden in the function f. I am not an expert on Kalman filters, so I will write down only a simple one that I could handle. In a simple Kalman filter, x_{new} is predicted from the past x_i-s. The past data span a subspace, and the new value is just the best projection onto that past subspace. That is how I understand it. (This might be wrong.) In Strang's book, an interesting computation method is explained, but it is just a sketch. Here I will write an overview of the idea in Strang's book. First, I need a preparation about 1/k. Therefore, we saw the last two blog en…
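A minimal sketch of the "1/k" idea mentioned above: the running average can be computed recursively, in the shape x_{new} = x_{old} + (1/k)(z_k - x_old), which is the simplest Kalman-style update. The data values are the ones from the earlier posts in this series.

```python
# Recursive (running) average: the simplest Kalman-style update
#   x_new = x_old + (1/k) * (z_k - x_old)
# where z_k is the k-th observation and 1/k is the blending weight.
data = [70.0, 80.0, 120.0]
x = 0.0
for k, z in enumerate(data, start=1):
    x = x + (z - x) / k      # blend the new observation with weight 1/k
print(x)                      # 90.0, the plain average of the data
```

Each step corrects the old estimate by a fraction 1/k of the new discrepancy, so no past data needs to be stored.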

Introduction to Linear Algebra: Projection and a Simple Kalman Filter (2)

Linear algebra way. In the linear algebra way, the best x is a projection of the right hand side onto the column space of the matrix. Because a solution is only possible in A's column space (meaning any candidate is represented as a linear combination of A's column vectors), the best solution is the projection of b onto the column space. Figure 1 shows the geometric interpretation. The matrix A and b are: Here A is a single column vector, so the projected best x is: The result is the same as the calculus way. This is also the average. (If you are not familiar with the projection matrix, see Figure 1. In the figure, e is perpendicular to A, i.e., A^T e = 0. You can derive it from there.) Figure 1. Projecting b onto A's column space. My interest is why these are the same. Minimization of a quadratic is a calculus (analysis) problem, and projection is more of a geometric problem. Is this just a coincidence? Of course not. They have the same purpose. As in Figure 1, projection is t…
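The projection can be checked numerically. This is a sketch for the single-column case A = [1, 1, 1]^T and b = [70, 80, 120]^T from the previous post: the coefficient is x = (A^T b) / (A^T A), and the error e = b - Ax is perpendicular to A.

```python
# Project b onto the column space of the single column A = [1, 1, 1]^T.
A = [1.0, 1.0, 1.0]
b = [70.0, 80.0, 120.0]

# x = (A^T b) / (A^T A)
x = sum(ai * bi for ai, bi in zip(A, b)) / sum(ai * ai for ai in A)
e = [bi - ai * x for ai, bi in zip(A, b)]   # error vector e = b - A x

print(x)                                        # 90.0, the average
print(sum(ai * ei for ai, ei in zip(A, e)))     # 0.0, so A^T e = 0
```

The perpendicularity A^T e = 0 is exactly the geometric statement in Figure 1.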

Introduction to Linear Algebra: Projection and a Simple Kalman Filter (1)

Introduction about linear algebra, calculus, and a simple Kalman filter. I found the relationship among linear algebra, calculus, and a simple Kalman filter interesting, so I will make a memo about it. Problem: given a simple observed data sequence (x_i), we can assume the observations are equally plausible and apply the least squares method. In this simple setting, the outcome is just the average. You can find this topic in Gilbert Strang's Introduction to Linear Algebra, chapter 4.2. For example, the observed data is [70 80 120]^T; if these are equally plausible, x = 70, x = 80, x = 120. These equations are strange. They look like x is 70 and 80 and 120 simultaneously. These are observed data, so they have some errors, but we are actually observing the same quantity. The motivation is to find the best possible value we could have from the observations. Therefore, the system is: There is no x that satisfies the system. But let us think about what is the bes…
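The least-squares setting above can be sketched directly: the three incompatible equations x = 70, x = 80, x = 120 become the problem of minimizing the squared error E(x) = (x-70)^2 + (x-80)^2 + (x-120)^2, whose minimizer is the average.

```python
# Least squares for the inconsistent system x = 70, x = 80, x = 120:
# minimize E(x) = sum of squared residuals. dE/dx = 0 gives the average.
data = [70.0, 80.0, 120.0]

def E(x):
    return sum((x - d) ** 2 for d in data)

best = sum(data) / len(data)
print(best)                                       # 90.0
print(E(best) <= min(E(best - 1.0), E(best + 1.0)))   # True: a nearby check
```

No x satisfies all three equations, but x = 90 makes the total squared error as small as possible.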

ifgi tracer: report 2010-11-12

I am slowly working on my small hobby project: ifgi tracer. Currently, I can load an obj file and rotate it in the window with a trackball operation. So far, it is still OpenGL only. No light, no zoom, no translation, no shading.... But at least I don't have gimbal lock. The source code is publicly available, though at this point it has no value at all. But I would like to have some fun with this project.

Eigenvalue and transfer function (8)

Last time I used sin and cos, but this relationship becomes simpler if we use Euler's formula. Let's apply the same operator T. Wow again. This is also an eigenfunction of the operator T. This function is built from trigonometric functions. Therefore, we use these trigonometric functions as the basis of frequency domain analysis. Eigenvalues give us a brief overview of an operation and its function. Assume we have an input x and an output y; operator T is applied to the input x, and the result is y. If an eigenvalue exists, we can write it as follows. This means the input is transferred to the output, and how much is transferred is λ. Therefore, signal processing people call this λ a transfer function. Why is it called a function? λ looks like a constant. Usually, λ is not a function of the input x, but it usually has some parameter, which makes it a function. For example, in the former equation, λ is not a function of x, but a function of ω. In signal processing, x is usual…
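A small numerical sketch of the Euler-formula version: e^{iωx} is an eigenfunction of the shift operator T: f(x) → f(x + h), with eigenvalue λ = e^{iωh}. The values of ω, h, and the sample point are arbitrary choices for illustration.

```python
import cmath

# e^{i*omega*x} is an eigenfunction of the shift T: f(x) -> f(x + h);
# the eigenvalue is lam = e^{i*omega*h}, and lam depends on omega (not on x).
omega, h = 2.0, 0.5        # assumed example values

def f(x):
    return cmath.exp(1j * omega * x)

lam = cmath.exp(1j * omega * h)
x = 1.3                    # arbitrary sample point
print(abs(f(x + h) - lam * f(x)) < 1e-12)   # True: T f = lam * f
```

Note that λ depends on ω but not on x, which is exactly why λ is called a transfer *function*: a function of frequency.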

Eigenvalue and transfer function (7)

Eigenvalue and transfer function. In Hamming's book, he repeatedly mentions the merit of using trigonometric functions as a basis in signal processing. Unfortunately, that is not the main point of this blog, so I cannot compare it with other bases; I will just stick to this basis, assuming it is a good one. Let's look at the eigenvalue of a trigonometric function, following the example in Hamming's book. The first example in his book is A sin x + B cos x. We apply a transformation operation and then see whether something has changed or not. If something doesn't change, it will be an eigenvector, and we will also see its eigenvalue. The transform T is a shift of the origin of the coordinate system, like T: x → x' + h. Why would someone want to shift the coordinate? For example, in signal processing it usually doesn't matter when you start to measure the signal sequence. When you started to measure the signal, that point is the origin. Usually the…
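What "nothing changes" means here can be sketched numerically: shifting f(x) = A sin x + B cos x by h gives another function of the same form, with A' = A cos h - B sin h and B' = A sin h + B cos h (from the angle-addition formulas). So the space spanned by sin and cos is invariant under the shift. The numeric values below are arbitrary choices for illustration.

```python
import math

# Shift T: x -> x + h applied to f(x) = A*sin(x) + B*cos(x)
# yields A'*sin(x) + B'*cos(x) with coefficients from angle addition:
A, B, h = 2.0, -1.0, 0.7   # assumed example values
Ap = A * math.cos(h) - B * math.sin(h)
Bp = A * math.sin(h) + B * math.cos(h)

x = 0.4                    # arbitrary sample point
shifted = A * math.sin(x + h) + B * math.cos(x + h)
same_form = Ap * math.sin(x) + Bp * math.cos(x)
print(abs(shifted - same_form) < 1e-12)     # True: same form, new coefficients
```

The shifted function stays inside the two-dimensional {sin, cos} space, which is the "something that doesn't change" the post is after.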

Eigenvalue and transfer function (6)

Eigenvalue and Eigenvector: Function case. Interestingly, the same story is repeated again for functions. (Well, ``interesting'' is just my personal feeling, so many might not agree. I find this --- the same story repeated again, but at a different level --- interesting in mathematics. It is like how the jokes in The Hitchhiker's Guide to the Galaxy have some mathematical structure.) So far, we have applied an ``operation'' to a scalar or a vector. Now we apply an operation to a function. We want to know the substance of the ``operation'' itself instead of each individual result of the operation. We cannot know the substance all at once, but we can know the response of a function to an operation. Usually, a function is itself an operation on a scalar or a vector, therefore it is a bit confusing to think about an operation on an operation. Let's see an example. Assume a function f and a scalar or a vector x; this function f can be an operation on x,…

Eigenvalue and transfer function (5)

Eigenvalue and Eigenvector: Vector case. Next, let's think about vectors. I assume the reader knows a bit about matrix multiplication. When a matrix is applied to a vector, it generates a new vector. For example, a matrix can rotate a vector, or enlarge/shrink a vector. We say we apply a matrix A to a vector x. The matrix A could be a rotation operation, or anything else. The result of the application is a new vector b: A x = b. If you look at this, it resembles scalar multiplication. But usually A is quite complex and it is hard to understand what it does. But suppose there is a scalar λ and a vector x' such that A x' = λ x'. This means: we can replace a complex matrix A with a scalar value λ. I think this sentence holds the whole idea of this topic. Usually it is not possible to find such a λ for an arbitrary vector, but we have a chance to find a specific x' and its related scalar value λ. When I saw this, I said, ``Wow.'' This is a powerful idea. If I…
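A tiny concrete sketch of A x' = λ x'. The matrix, eigenvector, and eigenvalue below are an assumed example (not from the post): A = [[2, 1], [1, 2]] has the eigenvector x' = [1, 1] with eigenvalue λ = 3.

```python
# For the specific vector x', the matrix A acts like the scalar lam.
A = [[2.0, 1.0], [1.0, 2.0]]   # assumed example matrix
xp = [1.0, 1.0]                # its eigenvector
lam = 3.0                      # its eigenvalue

# compute A x' by hand (row-by-row dot products)
Ax = [sum(A[i][j] * xp[j] for j in range(2)) for i in range(2)]
print(Ax)                             # [3.0, 3.0]
print(Ax == [lam * v for v in xp])    # True: A x' = lam * x'
```

For this particular direction, the complicated matrix multiplication collapses into multiplication by a single number.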

Eigenvalue and transfer function (4)

Eigenvalue and Eigenvector: Scalar case. Let's start with multiplication of scalars: 3 x 4 = 12. If I use variables a, x, b: ax = b. The first example shows x = 4 times a = 3 equals b = 12. Here, I multiply x by a; I didn't multiply a by x. In the scalar case there is no difference, since the commutative law ax = xa holds for scalars. Please note that this is not always correct; even with scalar numbers, we cannot always exchange the meaning. For example, assume there is a chocolate box that costs five Euro, and we buy two boxes. This is 2 times 5 Euro = 10 Euro: we double the five-Euro price. But we cannot read it the other way, as taking the number two ``five Euro times'': ``five times'' works, but ``five Euro times'' has a problem. So I keep in mind that the order of the operands is also important, since vectors and matrices are stricter about the order of operations.

Effective STL

For a long time, I knew I should read this book. This time I finally read Effective STL by Scott Meyers. The allocator material is way too difficult for me, but the other topics, like the usage of remove() and erase() and the algorithm chapters, are quite interesting and useful. Highly recommended.

Eigenvalue and transfer function (3)

Function. Here the matrix should come next according to the order of this story; however, I will skip the matrix part for simplicity. A vector is a sequence of scalars, and the order of the scalars is very important: if we change the order, we get a totally different vector. For example, suppose we have a vector that represents a position as [direction distance], meaning 50 degrees from north, distance 5 km. If we exchange the direction and distance, it becomes [distance direction], meaning 5 degrees from north, distance 50 km. That is a different position (at least in Euclidean space). We can also put in 4 scalars: the second row from the top of Figure 1 shows a vector with 4 scalars. A scalar is one (real) number and a vector is a sequence of scalars. If we add more scalars to a vector, what happens? If we add more and more scalars, infinitely many, then it becomes a function as shown in the figure. OK, I cheated here a bit. We need some more prerequ…