2010-07-29

Computer graphics for all by Takeo Igarashi

This issue of Communications of the ACM carries an article by Takeo Igarashi.

http://doi.acm.org/10.1145/1785414.1785436

I liked this article since it gives an overview of his research history. If I may presume to comment on his research: it is full of fun demos, it is practical, and it develops solid base technologies. They are great. I think people like the dreamlike side of his research; yes, I like that too. But I especially like the technology-oriented papers from his team, which I feel get less attention from the community. One example is SmoothTeddy. That work does not seem to be celebrated the way Teddy is, but I think SmoothTeddy makes a significant technical contribution to mesh optimization for interactive sketching systems. It is technical contributions like those that I value in his papers.

2010-07-28

A personal annotation of Veach's thesis (12), p. 71

2.8.2 Regression methods

Section 2.8.2's Equation (2.33) was a mystery to me at first. I asked a few specialists about it, but they told me it is not so important and that I should move on, the good parts are still ahead... So this annotation might not be so helpful, but I like the straightforward idea behind the equation.

The hint for understanding this equation is in the thesis itself: ``Equation (2.33) is the standard minimum variance unbiased linear estimator of desired mean I, ....'' This means he used Gauss's least squares method. Most of Veach's thesis is self-contained and easy to understand, but personally I would like to have one more equation, Equation (1), here. Writing the estimates of the n sampling techniques as a vector \hat{I} = (\hat{I}_1, ..., \hat{I}_n)^* and letting X = (1, ..., 1)^*, it is

  X F = \hat{I}.   (1)

This means each final estimate F should equal each sample mean \hat{I}_i. Veach might have thought this too obvious to bother writing down, but it surely helps dummies like me. We can derive Equation (2.33) from Equation (1), so let's try.

First of all, Equation (1) usually has no solution: if it did, the estimator could always reproduce every sample mean exactly. Writing the transpose of X as X^*, following Veach's notation, and applying Gauss's least squares method gives Equation (2):

  F = (X^* X)^{-1} X^* \hat{I}.   (2)

But I would say this is too naive. In the thesis, Veach suggests a more sophisticated method that uses the variance-covariance matrix \hat{V} as a weight for the least squares.

Using \hat{V}, the base equation is the weighted least squares problem, Equation (3):

  minimize over F:  (\hat{I} - X F)^* \hat{V}^{-1} (\hat{I} - X F).   (3)

I suppose the weight is an inverse matrix since we want low-variance samples to count more. The rest is the standard least squares derivation, and we obtain Equation (4):

  F = (X^* \hat{V}^{-1} X)^{-1} X^* \hat{V}^{-1} \hat{I}.   (4)

Equation (4) is exactly the same as Equation (2.33). Seen this way, the equation is nothing mysterious, but a natural one.
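Equation (4) becomes very concrete in the common special case of independent estimators: \hat{V} is then diagonal and F reduces to inverse-variance weighting. Here is a toy sketch of that case in C++ (my own code, not from the thesis):

#include <cstdio>
#include <vector>

// Equation (4) with X = (1, ..., 1)^* and diagonal \hat{V}:
// F = sum(I_i / v_i) / sum(1 / v_i), i.e. inverse-variance weighting.
double combine(const std::vector<double>& means,
               const std::vector<double>& variances)
{
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < means.size(); ++i) {
        num += means[i] / variances[i];
        den += 1.0 / variances[i];
    }
    return num / den;
}

int main()
{
    // Two unbiased estimates of the same integral: a noisy one and
    // an accurate one. The combination leans toward the latter.
    std::vector<double> means(2); means[0] = 0.52; means[1] = 0.48;
    std::vector<double> vars(2);  vars[0]  = 0.10; vars[1]  = 0.01;
    std::printf("F = %f\n", combine(means, vars)); // about 0.4836
    return 0;
}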

Real Professionals

Yesterday I read a modest yet, for me, impressive article. A staff member of MAFF (the Ministry of Agriculture, Forestry and Fisheries) uncovered a fraud from a single receipt and thereby protected many people's health. This is a real professional.

http://www.asahi.com/national/update/0726/TKY201007260261.html

If this article is accurate, it is magnificent. I thank the staff member at MAFF, and also the people who supported him or her. Sometimes when someone finds something, it never reaches the managers. (Some managers are corrupt, and later it becomes a big scandal.) This time it seems MAFF itself bears part of the responsibility, but they handled it correctly. I hope this work is properly recognized and rewarded.

My impression of yesterday's news is that it was not the usual fare: the son of a drug kingpin trying to come to terms with his father's deeds, an unsolved murder from 15 years ago, an American living in Berlin opposing a Hiroshima anti-nuclear ceremony in Potsdam, and so on.
http://www.asahi.com/national/update/0726/TKY201007260231.html
http://www.asahi.com/international/update/0719/TKY201007180475.html
http://www.asahi.com/international/update/0725/TKY201007250377.html

2010-07-27

A personal annotation of Veach's thesis (11), p. 67

2.7.2 Russian Roulette

Russian roulette is one of the sampling techniques; it selects sample paths stochastically, which includes stochastic ray termination. The light transport equation is an integral whose integrand has several terms, and the magnitude of each term depends on the sample. If a term is quite small, or the material's reflectance says some direction is improbable, then sampling that term or direction contributes little to the result, yet costs the same. But if we never sample such a term at all, the solution becomes biased, because it is always possible that light comes from that direction.


Figure 1. Russian roulette: (a) non-Russian-roulette (splitting) ray tracing, which samples many directions at a glossy surface; (b1) Russian roulette sampling, which samples only one direction, chosen according to a probability; (b2) Russian roulette stochastic termination; (c) a traced path is likewise chosen and terminated with specific probabilities.


Figure 1 (a) shows a non-Russian-roulette sampling method (splitting). A glossy surface receives light from many directions, therefore we need to sample many directions. Method (b1) is Russian roulette: it samples only one direction per incoming ray, with the direction chosen stochastically. Or, as in case (b2), the ray is terminated stochastically. You might ask: if only one direction is sampled, how can a glossy appearance emerge? We cast many rays at each pixel. In cases (b1, b2), each ray continues to sample only one direction, so there is no exponential growth of computational cost. (c) shows a path that is created stochastically and also terminated stochastically. This is the Russian roulette method. Here, even when a ray hits a transparent/reflective surface, we sample only one transmitted or one reflected direction (or terminate). This method needs many samples per pixel, but the cost per path does not vary so drastically from path to path. This is an important technique for unbiased algorithms.
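To make the termination rule concrete, here is a minimal sketch of Russian roulette inside a path-tracing loop. This is my own toy code, not Veach's; the scene interaction is only indicated by comments. The key point is that a surviving path is divided by the continuation probability, so the expected contribution is unchanged and the estimator stays unbiased.

#include <random>

struct Color {               // placeholder radiance type
    double r, g, b;
};

std::mt19937 rng(42);
std::uniform_real_distribution<double> uniform01(0.0, 1.0);

// One path per call: sample ONE direction per bounce (b1) and
// terminate the path stochastically (b2/c).
Color trace_path(/* Ray ray, Scene scene: placeholders */)
{
    Color radiance = {0, 0, 0};
    double weight = 1.0;     // path throughput (scalar for brevity)
    for (int depth = 0; depth < 1000; ++depth) {
        // ... intersect the scene, accumulate emitted light into
        //     'radiance' scaled by 'weight' ...
        double p_continue = 0.8;      // any value in (0, 1] is valid
        if (uniform01(rng) >= p_continue)
            break;                    // terminated; no bias is added
        weight /= p_continue;         // compensate the survivors
        // ... sample ONE reflected or transmitted direction, update
        //     'weight' by the BRDF term, and continue the loop ...
    }
    return radiance;
}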

Acknowledgements:
Thanks to Leo for comments and discussions.

2010-07-24

A personal annotation of Veach's thesis (10), p. 60

Star discrepancy D_N^*



I might say the goal of the quasi-Monte Carlo method seems to contain a sort of contradiction. It is:
  1. to minimize the irregularity of the distribution,
  2. while not using regular uniform samples, to avoid aliasing.
To minimize the irregularity of a distribution, the straightforward answer is regularly distributed sampling. But that causes aliasing, and avoiding aliasing is the very reason to use the Monte Carlo method in the first place (note: the plain Monte Carlo method, not quasi-Monte Carlo).

A discrepancy is a quantitative measure of the irregularity of a distribution. This measure tells us how much integration error can occur.

The star discrepancy D_N^* is one such discrepancy. Over all axis-aligned boxes B = [0, u_1) x ... x [0, u_s) that have one corner at the origin, it measures the worst error we make when the volume of B is estimated by the fraction of the N samples falling inside it:

  D_N^* = sup_B | #{i : x_i in B} / N - vol(B) |.

In the one-dimensional case, the boxes are just intervals, so we estimate the length of an interval containing the origin by counting samples. If we place N samples regularly and uniformly, lengths below 1/(2N) cannot be resolved correctly; this is the same story as in sampling theory. But when we go to an s-dimensional region, the error bound becomes O((log N)^s / N).
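For example, in one dimension the midpoints of a regular subdivision, x_i = (2i - 1)/(2N) for i = 1, ..., N, give (as far as I know) the smallest possible star discrepancy:

  D_N^* = 1/(2N).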

I don't understand why it is O((log N)^s / N). It is the s-dimensional generalization, but it does not even match the one-dimensional case when s = 1. Someone told me there is a lengthy, complicated proof. Figure 1 shows the plot: when the number of samples is small and the dimension is high, the error bound becomes large. It is just the growth of (log N)^s against N; someone says the behavior is obvious, but I wanted to see it anyway.

Here is a program to draw this plot, in Matlab/Octave:

%
% Visualize the star discrepancy error bound (log N)^s / N
% Octave code: 2010 (C) Shitohichi Umaya
%
x = [10000:100:1000000];     % number of samples N
s = 2;                       % dimension
y1 = power(log(x), s)./x;
s = 3;
y2 = power(log(x), s)./x;
% format strings are "<color>;<legend>;" (legacy Octave/gnuplot style)
plot(x,y1,"1;(log N)^s/N, s=2;", x,y2,"3;(log N)^s/N, s=3;")

Acknowledgements:
Thanks to Alexander K. for explaining to me what the discrepancy is.

2010-07-19

Sierpinski tetrahedron

A while ago I blogged about fighting a template error. Today I finally finished that program: a Sierpinski tetrahedron generator. http://en.wikipedia.org/wiki/Sierpinski_triangle
This time I wanted to learn OpenMesh (http://www.openmesh.org). I have my own mesh library, but usually I want to solve a problem, and if a well-developed open source library exists, I think it is better to switch to it. Figure 1 shows how to get a consistent list of vertices with OpenMesh. Here, ``consistent'' means I always want the vertices indexed as in Figure 1, top; for example, if O_1 and O_3 were exchanged, I would have trouble creating the Sierpinski tetrahedron.

Figure 1: tetrahedron configuration


Figure 2: creation rule 1
Figure 3: creation rule 2


Actually, I don't know the best way to do this; I depend on OpenMesh's face halfedge circulator. If anybody knows a better way, please leave a comment. In my method, I first need to guarantee that the input is a tetrahedron. This is done as follows (a code sketch comes after the list):

1. Pick one face, f0.

2. Get the 1-ring neighbor faces of f0 with the face-face circulator. Now we have four faces (f0, f1, f2, f3). If we cannot get four, there is a boundary; since a tetrahedron has no boundary, f0 is not a face of a tetrahedron.

3. For each of {f0, f1, f2, f3}, get the 1-ring neighbor faces and check that every one of them is in {f0, f1, f2, f3}. If we find an unknown face, this is not a tetrahedron. Actually, f1, f2, f3 were obtained from f0's face-face circulator, so we do not need to check f0 again.
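In code, the check looks roughly like the sketch below. This is only my sketch: I assume a TriMesh_ArrayKernelT mesh and OpenMesh's face-face circulator (FaceFaceIter) as they behave in the version I have; other versions may spell the iteration differently.

#include <set>
#include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>

typedef OpenMesh::TriMesh_ArrayKernelT<> TriMesh;

// Check that face f0 belongs to a tetrahedron (steps 1-3 above).
bool is_tetrahedron(TriMesh& mesh, TriMesh::FaceHandle f0)
{
    // Step 2: f0 plus its 1-ring neighbor faces must be four faces.
    std::set<int> faces;
    faces.insert(f0.idx());
    for (TriMesh::FaceFaceIter ff = mesh.ff_iter(f0); ff; ++ff)
        faces.insert(ff.handle().idx());
    if (faces.size() != 4)
        return false;         // a boundary, or not a tetrahedron

    // Step 3: every neighbor of f1, f2, f3 must be one of the four.
    for (std::set<int>::const_iterator fi = faces.begin();
         fi != faces.end(); ++fi)
    {
        TriMesh::FaceHandle f = mesh.face_handle(*fi);
        for (TriMesh::FaceFaceIter ff = mesh.ff_iter(f); ff; ++ff)
            if (faces.find(ff.handle().idx()) == faces.end())
                return false; // unknown face: not a tetrahedron
    }
    return true;
}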

Once we have a consistent vertex list, we follow the rules of Figures 2 and 3 to generate the Sierpinski tetrahedron. The OpenGL-rendered result is shown in the following figures. This is nothing new and nothing complicated, but I had not programmed anything mesh-related in a while, and it was fun.

2010-07-17

A conference schedule table generator program

I have used a conference schedule table for a long time. A few months ago, the blog http://www.realtimerendering.com/ wrote about that schedule table, and there was some discussion of how to improve it. The table is optimized for looking up the next closest deadline date, so whenever I got a paper rejection, I looked up this table.

After the discussion, the author of the table generator made the program free software. You can download the source code from http://code.google.com/p/gen-conference-schedule-table/ (new BSD license).

I downloaded the program and ran the unit test script, which produces the following figure. An HTML image map file is also generated; if you put it into an HTML page, the image becomes a clickable map (but not in this blog...).






I have one more wish: that Eurographics's conference calendar had a figure like this.
http://www.eg.org/EG/News/upcomingevents
http://confcal.vrvis.at/

2010-07-16

A personal annotation of Veach's thesis (9)

Page 50: Stratified

I was not familiar with the word `stratified.' A Monte Carlo method usually uses a pseudo-random number sequence. A random number sequence should have the property of `unexpectedness'; this means a run like 1, 1, 1, 1, ..., 1 can also be part of a random number sequence. This issue is well explained in Knuth's book (D. E. Knuth, The Art of Computer Programming, Vol. 2, the chapter on random numbers). The resulting sampling pattern can look like the left of Figure 1.

Figure 1. Stratification example. Left: 12 random samples (clumping problem). Right: 12 random samples, stratified on a 2x2 grid.

In Figure 1 left, some parts are not sampled well and some parts are oversampled: the sampling pattern is clumped. Clumping is bad luck, in a sense. Usually it is resolved by increasing the number of samples, but that costs computation, meaning it takes more time. To solve the clumping problem without losing performance, the sampling region is subdivided, or stratified: even with bad luck, we still sample reasonably uniformly. This is called a stratified method. Readers might wonder, ``then why don't you just sample uniformly?'' Uniform sampling avoids the clumping problem, but it causes another one, aliasing, by the sampling theorem. My friend said that ``the quasi-Monte Carlo method is a good stratified method.''
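As an illustration, here is a small sketch (my own, not from the thesis) that subdivides the unit square into k x k strata and draws one random sample per stratum. The jitter inside each cell keeps the unexpectedness, while the strata prevent clumping:

#include <cstdio>
#include <random>
#include <vector>

struct Point { double x, y; };

// One jittered (stratified) sample per cell of a k x k grid.
std::vector<Point> stratified2d(int k, std::mt19937& rng)
{
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::vector<Point> pts;
    pts.reserve(k * k);
    for (int i = 0; i < k; ++i)
        for (int j = 0; j < k; ++j) {
            // cell (i, j) covers [i/k, (i+1)/k) x [j/k, (j+1)/k)
            Point p = {(i + u(rng)) / k, (j + u(rng)) / k};
            pts.push_back(p);
        }
    return pts;
}

int main()
{
    std::mt19937 rng(1);
    std::vector<Point> pts = stratified2d(4, rng); // 16 samples
    for (std::size_t i = 0; i < pts.size(); ++i)
        std::printf("%f %f\n", pts[i].x, pts[i].y);
    return 0;
}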



Acknowledgements

Special thanks to Daniel S. and Leonhard G., who explained to me what the stratified method is.

2010-07-14

Fighting with a template error (as usual)

I want to write a program with OpenMesh (http://www.openmesh.org). I tried some examples that worked a while ago, but now they fail with the following compile error (a template instantiation error).
/usr/X11R6/bin/g++ -Wp,-MD,Ubuntu9.04/VisOMTriMeshDNode.dep -DHAVE_SSTREAM -DUSE_GMU_GERR -std=c++0x -Wall -Wnon-virtual-dtor -Woverloaded-virtual -DARCH_LINUX-DCOMP_GCC -I/usr/X11R6/include -I/usr/X11R6/include -I/usr/include/tcl8.4 -I/usr/include/qt4 -I/usr/include/qt4/QtCore -I/usr/include/qt4 -I/usr/include/qt4/QtGui -I/usr/include/qt4/QtNetwork -I/usr/include/qt4/QtOpenGL -I/opt/OpenMesh/src -I/home/project/shared_proj -fPIC -g -Wno-uninitialized -D_INCTEMP -DUSE_GMU_GERR -DGMU_DBG_STRIPPATH=\"ALL\" -o Ubuntu9.04/VisOMTriMeshDNode.o -c VisOMTriMeshDNode.cc
/usr/include/c++/4.3/ext/new_allocator.h: In member function 'void __gnu_cxx::new_allocator<_Tp>::construct(_Tp*, _Args&& ...) [with _Args = long int, _Tp = OpenMesh::BaseProperty*]':
/usr/include/c++/4.3/bits/stl_vector.h:703:   instantiated from 'void std::vector<_Tp, _Alloc>::push_back(_Args&& ...) [with _Args = long int, _Tp = OpenMesh::BaseProperty*, _Alloc = std::allocator]'
/opt/OpenMesh/src/OpenMesh/Core/Utils/PropertyContainer.hh:104:   instantiated from 'OpenMesh::BasePropHandleT OpenMesh::PropertyContainer::add(const T&, const std::string&) [with T = OpenMesh::Attributes::StatusInfo]'
/opt/OpenMesh/src/OpenMesh/Core/Mesh/BaseKernel.hh:133:   instantiated from 'void OpenMesh::BaseKernel::add_property(OpenMesh::VPropHandleT&, const std::string&) [with T = OpenMesh::Attributes::StatusInfo]'
/opt/OpenMesh/src/OpenMesh/Core/Mesh/ArrayKernel.hh:486:   instantiated from here
/usr/include/c++/4.3/ext/new_allocator.h:114: error: invalid conversion from 'long int' to 'OpenMesh::BaseProperty*'
I remember being able to compile this example. Why not now? Maybe my environment was updated. So first I updated the OpenMesh library from R3 to R5, but that changed nothing. This is a template instantiation error, which I don't like. I read the source code for three hours wondering: where does this 'long int' come from? Finally I realized that the g++ option "-std=c++0x" was the reason. I had totally forgotten that I added this option, because hash_map is not standard and will become unordered_map at some point. Today was another unproductive day...
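For the record, the error text (_Args = long int, _Tp = OpenMesh::BaseProperty*) suggests that PropertyContainer pushes NULL, which is a long int on this platform, into a vector of pointers. My guess at a minimal reproduction, based only on that reasoning:

#include <cstddef>
#include <vector>

int main()
{
    std::vector<int*> v;
    // Fine as C++98; with g++ 4.3 -std=c++0x this fails like above:
    // invalid conversion from 'long int' to 'int*'
    v.push_back(NULL);
    return 0;
}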

2010-07-11

Ryouma ga yuku by Shiba Ryoutarou

I finally finished reading "Ryouma ga yuku" by Shiba Ryoutarou. My problem is always how to get the books. This time my friend's sister visited Germany, and I asked her to carry the last two volumes. I spent the whole Saturday reading them; it was so exciting I could not stop. I did not much care for Shiba's other books, which is why I had not read this one until now. How stupid I was. I simply did not understand Shiba's books when I was a high school student; maybe I was not mature enough for them.

面白き,こともなき世を,面白く,
住みなすものは心なりけり. 高杉晋作
(Even if the world has no fun, our mind is a possibility to make it fun.
 Takasugi Shinsaku) (This is just one of possible translations. I cannot translate this well.)

常日頃,闇に覆わる我が心,
光となるは,竜馬の言葉. 山内斉
(Every day my mind is covered by darkness and depression;
 Ryouma's words become the light in my mind. Yamauchi Hitoshi)

2010-07-10

Visiting conferences

My hobby as a Sunday researcher is visiting conferences. Since my friends are now spread all over the world, visiting a conference is an easy way to meet some of them. A conference also gives me a lot of ideas and motivation.

I visited Eurographics 2010, HPG 2010, and SGP 2010 this year. This hobby is expensive for an individual, since I pay for everything myself and need to take vacation days. The pictures are snapshots of SGP 2010: the conference venue and the city hall in Lyon. There were several conferences in France this year, SMI, Curves and Surfaces, and SGP; I chose SGP because I knew in advance that some of my friends would be there.

Whenever I visit a conference, one thought always comes to me: I am not cool. OK, I was never cool, but at least I used to try. I did not present any new ideas; I was just there, receiving ideas. I doubt the meaning of my life, and that is sad... Yet maybe I could still do something. I should do something.

2010-07-09

Unnecessary complexity

This is a picture of my friend's toilet (Nancy, France). There should be a heater inlet and outlet, a toilet inlet and outlet, and a basin inlet and outlet; the water inlets are shared, and so are the outlets. Still, I do not understand how it became this complex. I have the impression that such unnecessarily complex things are easy to find in France... it is just my impression. Well, one of the train systems I experienced in England was also unnecessarily complex.


2010-07-06

2.5 LU decomposition (3) Why do we do LU decomposition?

Finally I can conclude the LU decomposition story. If you are not interested in linear algebra, unfortunately this entry is not for you.





In the last two blog entries, we saw that LU decomposition is just elimination, but with the elimination matrices kept in inverted form. As I understand it, LU decomposition has two advantages.

The first point is that L and U are triangular matrices, and a triangular matrix is the easiest kind to solve by back substitution. For example, suppose we want to solve the following triangular system:

  [ 1 0 0 ] [x]   [ 5 ]
  [ 1 1 0 ] [y] = [ 2 ]
  [ 1 2 1 ] [z]   [ 2 ]

This is simple. I rewrite it as simultaneous equations:

  x          = 5   (12)
  x + y      = 2   (13)
  x + 2y + z = 2   (14)

Equation (12) directly shows x = 5. Equation (13) is x + y = 2; we know x = 5, therefore y = -3. Equation (14) is x + 2y + z = 2, therefore 5 - 6 + z = 2, and z = 3. We did not need any elimination here. Of course, elimination was done beforehand to get this triangular matrix, so this is no wonder: it is simply easy to solve a triangular system.

The second point is that LU decomposition needs only the matrix itself. Recall that elimination works on an augmented matrix to reach a solution; LU decomposition does not. When we want to solve a system with many right hand sides, we need only one decomposition as long as the matrix stays the same, and we can repeat the substitution for each right hand side. That is simple.

In conclusion, the reasons for doing LU decomposition are: L and U are both triangular and easy to solve, and for each system we need only one decomposition followed by substitutions. If only the right hand side changes, we do not need to decompose again. These are the two advantages of LU decomposition compared with plain elimination.
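To make the ``one decomposition, many right hand sides'' advantage concrete, here is a small sketch (my own toy code) of the two substitution passes. Once L and U are known, both passes can be repeated cheaply for every new right hand side; the main function reproduces the x = 5, y = -3, z = 3 example above.

#include <cstdio>
#include <vector>

typedef std::vector<std::vector<double> > Mat;
typedef std::vector<double> Vec;

// Solve L c = b by forward substitution (L is unit lower triangular).
Vec forward_subst(const Mat& L, const Vec& b)
{
    int n = (int)b.size();
    Vec c(n);
    for (int i = 0; i < n; ++i) {
        c[i] = b[i];
        for (int j = 0; j < i; ++j) c[i] -= L[i][j] * c[j];
    }
    return c;
}

// Solve U x = c by back substitution (U is upper triangular).
Vec back_subst(const Mat& U, const Vec& c)
{
    int n = (int)c.size();
    Vec x(n);
    for (int i = n - 1; i >= 0; --i) {
        x[i] = c[i];
        for (int j = i + 1; j < n; ++j) x[i] -= U[i][j] * x[j];
        x[i] /= U[i][i];
    }
    return x;
}

int main()
{
    // The triangular example above is itself a unit lower triangular
    // system, so one forward pass already gives x = 5, y = -3, z = 3.
    Mat L = {{1, 0, 0}, {1, 1, 0}, {1, 2, 1}};
    Vec b = {5, 2, 2};
    Vec c = forward_subst(L, b);
    std::printf("%g %g %g\n", c[0], c[1], c[2]);
    return 0;
}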

Now we solve the system by LU decomposition. We use the following observation: since A = LU, the system Ax = b is L(Ux) = b, so we can first solve Lc = b by forward substitution and then Ux = c by back substitution.

This is an example system.







The LU decomposition itself was done by elimination. After that, we just solve two systems, both triangular, and we have already seen how simple a triangular system is to solve. This is why LU decomposition has an advantage.

2.5 LU decomposition (2) Properties of L and U

This is a continuation of LU decomposition; if you have no interest in linear algebra, unfortunately this entry is not for you.

Last time, we saw a nice property: L, the inverse of the product of the elimination matrices, simply contains the sign-inverted components of those elimination matrices. I think many of my friends would say, ``so what?'' But I find it convenient when the result of a matrix product can be written down without doing the multiplication.

First, a note. In the last blog entry, L = (E_{32} E_{31} E_{21})^{-1} had the sign-inverted components of the E's. But this is not true for the product E_{32} E_{31} E_{21} itself, that is, for L^{-1}; it only holds after inversion. In L^{-1}, the component of E_{31} is not preserved. (In this example, E_{31} merely has an inverted sign, but if you solve Problem 8 on p. 103 of Gilbert Strang's book, you will find there is more behind this.)

Let's come back to the property. Intuitively, it follows from how multiplication from the left operates on rows. Let me explain with a 2 by 2 example. The elimination matrix for component 21 is the lower triangular matrix of Equation (6):

  E_{21} = [ 1      0 ]
           [ l_{21} 1 ]   (6)

Equation (7) emphasizes the row operation, writing A as its rows a_{r_1}, a_{r_2}:

  E_{21} A = [ a_{r_1}                  ]
             [ l_{21} a_{r_1} + a_{r_2} ]   (7)

If you think of A as two simultaneous linear equations (a_{r_1}, a_{r_2}), we can remove the first term of a_{r_2} by multiplying a_{r_1} by l_{21} and adding it to a_{r_2}; the elimination matrix E_{21} performs exactly this. Its inverse is Equation (9):

  E_{21}^{-1} = [ 1       0 ]
                [ -l_{21} 1 ]   (9)

You can find this from the rule for 2 by 2 inverses, but you can also read off the meaning of the operation: the inverse must remove l_{21} a_{r_1} from the second row again, since l_{21} a_{r_1} + a_{r_2} - l_{21} a_{r_1} = a_{r_2}. I find this a better understanding than the 2 by 2 inverse rule, because it does not depend on the size of the matrix: it works for 3 by 3, 4 by 4, ..., n by n, and the general problem is n by n.




According to this property of elimination, the matrices L and U have the following structure: some of the components are always zero. L has its non-zero components in the lower triangular part, and U in the upper triangular part. Beyond that, when parts of A happen to contain zeros, we get the following interesting property (p. 96 in Gilbert Strang's book):
  1.  When a row of A starts with zeros, so does that row of L.
  2.  When a column of A starts with zeros, so does that column of U.
For 1: a leading zero in a row of A means that elimination step is already done. The purpose of elimination is to produce a zero, and if the entry is already zero there is nothing to do, so the corresponding component of the elimination matrix, and hence of L, is zero. For 2: this is related to elimination proceeding from top to bottom. For example, suppose a column of A starts with zeros in its first rows; row 3 of U is A's row 3 minus a linear combination of rows 1 and 2, and if those rows have zero in that column, the linear combination there is also zero (l_{31} 0 + l_{32} 0 + 0 = 0), so the zeros survive into U.
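A tiny example of my own for property 1, written in the usual convention where L stores the elimination multipliers: the second row of A starts with a zero, so that elimination step has nothing to do, and the zero reappears in L.

  A = [ 1 2 3 ]   [ 1  0   0 ] [ 1 2 3    ]
      [ 0 4 5 ] = [ 0  1   0 ] [ 0 4 5    ] = L U
      [ 2 6 7 ]   [ 2  1/2 1 ] [ 0 0 -3/2 ]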

In the end, all we did here was elimination. So why do we not just do elimination instead of LU decomposition? As I understand it, there are two reasons. Next time, I will finally explain why we do LU decomposition.

2010-07-04

Drei Gäste im Restaurant

Now my favorite quiz is available in German (with Isabelle's help, thanks).

Drei Gäste im Restaurant

Wir (ich, Isabelle und mein Freund) waren in einem Restaurant. Wir nahmen ein Menü. Es kostete 300 Euro. Der Kellner kassierte das Geld von uns und ging in die Küche zurück, dann sagte der Chef: ,,Heute ist der Jahrestag des Restaurants, also kostet das Menü nur 250 Euro.'' Der Kellner kam an den Tisch zurück, aber er dachte: ,,Man kann 50 Euro nicht durch drei Personen teilen.'' Dann sagte er: ,,Heute ist unser Jubiläumstag, also bekommen Sie 30 Euro zurück.'' Jeder bekam 10 Euro. Wir dankten dem Kellner und gingen nach Hause. Der Kellner fühlte sich ein bisschen schuldig, aber er entschied, es zu vergessen. Doch dann bemerkte er etwas Seltsames: ,,Jeder bezahlte 90 Euro, also ist die Summe 270. Ich habe 20 Euro von ihnen. Die Summe ist 270 plus 20 gleich 290. Aber zuerst bekam ich 300 Euro.'' Der Kellner schrieb:

  Jeder bezahlte 90 x 3 = 270
+ Ich habe                 20
-----------------------------
  Summe                   290

Wo sind denn die 10 Euro?

(In English: the three of us paid 300 Euro for a menu. Because it was the restaurant's anniversary, the menu actually cost only 250 Euro. The waiter could not split 50 Euro three ways, so he gave each of us 10 Euro back and kept 20. Later he puzzled: each of us effectively paid 90 Euro, 3 x 90 = 270; he kept 20; 270 + 20 = 290, yet he originally received 300 Euro. Where did the 10 Euro go?)

2010-07-03

What is LU decomposition, and why do I care about it? (1)

Gilbert Strang's Introduction to Linear Algebra 2.5

There is a method called LU decomposition. I, a Sunday researcher, always start with kind-of-stupid questions:
- What is the LU decomposition?
- Why do I care about that?
LU decomposition is actually almost the elimination method, and the reason we do elimination is that we want to solve a system of linear equations. I think the most important thing is understanding the problem; still, if we understood the problem but could not get the answer, that would be a bit sad, so I hope to have a solution too. Elimination is one of the methods to solve such a system: it adds or subtracts one equation from another to remove some of the variables. In junior high school in Japan, I learned this as elimination for simultaneous equations (連立一次方程式の消去法). Let's see an elimination example using the following matrix A.

We remove a_{21} and a_{31} of A by applying the elimination matrices E_{21} and E_{31}. For example, E_{21} subtracts row 1 from row 2 of A to remove a_{21}; therefore its (2,1) component is -1, and the rest is the identity matrix. E_{32} is not known at this point.


E_{21} E_{31} A is the following; it removes a_{21} and a_{31}, and you can see they are now zeros.
Now we know the entry a_{32} is _2_. To make the matrix triangular, E_{32} is the following, and the result of the elimination is:

This is the U of the LU decomposition. L is the following.

When you look at the components of L = (E_{32} E_{31} E_{21})^{-1}, each component of the E's shows up in L with its sign inverted. I gave corresponding components the same color in Equation (5).

I was surprised by this correspondence, because it means I know the exact result of multiplying these matrices without actually multiplying them. How convenient. I will stop here today and think about the reason for this.
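One identity I want to keep in mind for that: inverting the product reverses the order,

  L = (E_{32} E_{31} E_{21})^{-1} = E_{21}^{-1} E_{31}^{-1} E_{32}^{-1},

and each E_{ij}^{-1} is simply E_{ij} with the sign of its one off-diagonal component flipped. In this reversed order, the off-diagonal components do not interfere with each other when multiplied, so they drop straight into their places in L.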