Friday 7 November 2014

Big-O

In the lecture after the second term test, we looked at proofs of big-O. Proving the big-O of a function from the formal mathematical definition first requires picking a constant c larger than the coefficient of the highest-order term, and then picking a breakpoint B. For basic, straightforward functions, nothing too complex is needed to finish the proof. For proving the big-O of functions more complicated than basic polynomials, though, we need to go through the process of upper-bounding one function and lower-bounding the original function, and then choosing an appropriate c to relate the two. All of this in order to get a safe estimate of what the big-O of the original function is.
Like proofs we have done before, you can also prove statements about big-O through the negation. The same logic of choosing the B and c values applies in all big-O proofs. For polynomials, figuring out the big-O of a function is pretty easy, as it depends just on the highest-degree term. For non-polynomial functions, proving the big-O requires taking a limit and connecting the definition of a limit to that of big-O. This is done in three steps:
1. Use L'Hopital's rule to show that the limit of the ratio of one function to the other goes to infinity as the function's variable (n) goes to infinity.
2. Convert the limit determined in step one into the formal definition of a limit.
3. Relate that to the formal definition of big-O.
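The definition itself can at least be spot-checked numerically. As a sketch (my own made-up example, not one from lecture), take f(n) = 5n² + 3n + 2 and claim it is O(n²) with the witnesses c = 6 and B = 4:

```python
# Spot-check the big-O claim: 5n^2 + 3n + 2 is in O(n^2),
# using the witnesses c = 6 and B = 4 from the formal definition:
# for all n >= B, f(n) <= c * g(n).

def f(n):
    return 5 * n**2 + 3 * n + 2

def g(n):
    return n**2

c, B = 6, 4

# Check the inequality for a range of n at and past the breakpoint.
ok = all(f(n) <= c * g(n) for n in range(B, 10_000))
print(ok)  # True: the bound holds at every tested n
```

Of course, checking finitely many n proves nothing by itself; the real proof shows 6n² - (5n² + 3n + 2) = n² - 3n - 2 ≥ 0 for every n ≥ 4.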

Sunday 26 October 2014

Proofs go Poof

This week's topics included a conclusion of proofs and the beginning of algorithms.
Proof by cases, the last proof method we learnt, involves breaking the problem down into different cases or situations, solving each case, and then bringing the problem full circle by joining up the conclusions. We wrapped up proofs with a summary of introduction and elimination rules for the basic proof conditions.

Then we started on the more...computer science side of things: Chapter 4, "Algorithm Analysis and Asymptotic Notation", which is basically about classifying algorithms by their big-O notation. It is about how efficiently an algorithm (a set of problem-solving steps) works. We started off with bubble and merge sort, comparing the difference between their run times given the same set of data for both sorts. The point to note is that in computer science, run time isn't actually about the seconds the program takes to complete its job. Rather, it is about the number of steps the program has to run, and how that number changes with respect to the actual code of the algorithm and the kind of input or data it has to work with. This makes it independent of the hardware it is run on.
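To make the step-counting idea concrete, here is a rough Python sketch that counts comparisons made by both sorts on the same data. (Counting comparisons is one possible choice of "step"; the lecture's counting convention may differ.)

```python
# Count comparisons made by bubble sort and merge sort on the same input.

def bubble_sort_comparisons(data):
    a = list(data)
    count = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            count += 1                      # one comparison per step
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, count

def merge_sort_comparisons(data):
    if len(data) <= 1:
        return list(data), 0
    mid = len(data) // 2
    left, cl = merge_sort_comparisons(data[:mid])
    right, cr = merge_sort_comparisons(data[mid:])
    merged, count = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        count += 1                          # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, count

data = list(range(200, 0, -1))          # the same reversed 200-item list
_, bubble_steps = bubble_sort_comparisons(data)
_, merge_steps = merge_sort_comparisons(data)
print(bubble_steps, merge_steps)        # bubble's count dwarfs merge's
```

On this input, bubble sort's comparison count grows quadratically with the list size while merge sort's grows much more slowly, which is exactly the difference in run time the lecture was getting at.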

Another thing to note is that constant factors do not matter in determining run time: 2n and 2000n still have the same linear run time. Furthermore, when it comes to large data sets, the lower-order terms become irrelevant to the run time. Something like n² + 2n + 3 is pretty much the same as n², because the 2n + 3 is tiny in size compared to the n², which makes it have little to no effect on the run time.
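A quick way to see the lower-order terms fading (a little sketch of my own):

```python
# The ratio (n^2 + 2n + 3) / n^2 approaches 1 as n grows,
# so for big-O purposes the two functions are the same.

def ratio(n):
    return (n**2 + 2 * n + 3) / n**2

for n in (10, 100, 10_000):
    print(n, ratio(n))
# the ratios shrink toward 1 as n grows
```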

The asymptotic notation used to depict run times is O(f(n)), whereas the exact form is the function f(n) in its entirety. This brought us to the end of the class with a brief look at worst, average, and best case scenarios for an algorithm, where the worst case asks under what data conditions the algorithm has the longest run time, and the best case asks under what data conditions it has the shortest.
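Bubble sort is a nice way to see best versus worst case. This sketch assumes the common early-exit variant (stop when a pass makes no swaps), which we may or may not cover in lecture:

```python
# Bubble sort with an early-exit check: already-sorted input (best case)
# needs one pass; reverse-sorted input (worst case) needs every pass.

def bubble_sort_steps(data):
    a = list(data)
    steps = 0
    for i in range(len(a) - 1):
        swapped = False
        for j in range(len(a) - 1 - i):
            steps += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:      # a pass with no swaps means the list is sorted
            break
    return steps

best = bubble_sort_steps(list(range(100)))          # sorted input
worst = bubble_sort_steps(list(range(100, 0, -1)))  # reversed input
print(best, worst)  # 99 comparisons versus 4950 on 100 items
```

Same algorithm, same size of input, wildly different step counts: that difference is entirely about the *kind* of data it gets.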

Saturday 18 October 2014

Marks and Proofs

Getting our tests back so quickly came as a bit of a shock this week. My mark gave me a bigger shock. Messing up on a topic you thought you understood (which in my case was the order of the quantifiers and how the meaning of a statement changes depending on that order) is a bit depressing.

Sad test marks aside, the shorter-than-usual lecture this week covered actually doing proofs rather than just their structures. We started off with non-boolean function proofs and the relatively new concept of the 'floor' of a function. I understood the proofs when they were done in class, but my problem is actually doing them by myself. Understanding something that's already there is much easier than creating something based on the understanding you already have of it. So the "takeaway" slides at the end of each section of the lecture helped me get a better viewpoint on things such as working with expressions to make them look more like the ones in the question itself.

Then came disproving a statement, which is basically the same as proving its negation. What I found a bit tricky here was visualizing the examples, because not all functions or statements are as straightforward or easy to visualize as the ones we do in lecture.
Lastly, we did epsilon-delta proofs of limits. Again, this is something we're doing in calculus too, so it's just a tiny bit easier to understand. But the backward search way of finding the delta is something that I still have to work on.
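The backward search can at least be sanity-checked numerically. As an assumed example (my own, not the one from lecture): for the limit of 2x + 1 as x approaches 3, working backward from |(2x + 1) - 7| < ε suggests δ = ε/2, and a quick script can test that choice on many sample points:

```python
# Numerically test the backward-search choice delta = eps / 2
# for the claim: lim of (2x + 1) as x -> 3 equals 7.

def delta_for(eps):
    return eps / 2

def check(eps, samples=1000):
    d = delta_for(eps)
    for k in range(1, samples + 1):
        offset = d * k / (samples + 1)     # 0 < offset < delta
        for x in (3 - offset, 3 + offset):  # points on both sides of 3
            if abs((2 * x + 1) - 7) >= eps:
                return False
    return True

print(all(check(eps) for eps in (1.0, 0.1, 0.001)))  # True
```

This doesn't replace the proof (only finitely many points are checked), but it's a nice way to catch a wrong delta before writing everything up.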

All in all I think proofs are something that give a lot of people trouble but hopefully we all will pull through this and survive. 

Sunday 12 October 2014

Tried and Tested


One test done! And it wasn't even that bad! Looking at the tests from past years had caused me a bit of worry, seeing as they had things we had not covered in class yet, but it turns out there was no need to worry at all. Everything on the test related to topics I was sure were going to be on it, and there were no extra-complicated or trick questions (unless there were and I didn't catch them).

In the lecture that we (sadly) had after the test, we talked about 4 ways of doing a proof:
  • proof using contrapositive - should be used when the contrapositive is easier to prove than the statement itself.
  • proof using contradiction - assume the statement is false and show that this leads to a contradiction.
  • proof for existence - find just one example that proves the statement.
  • proof about a sequence - break the sequence down into parts based on the quantifiers in the statement.
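As a worked illustration of the first method (my own example, not one from lecture): to show that for any natural number n, if n² is even then n is even, the contrapositive ("if n is odd then n² is odd") is much easier to prove directly.

```latex
% Claim: if n^2 is even, then n is even.
% Contrapositive (easier to prove directly): if n is odd, then n^2 is odd.
\begin{align*}
  n \text{ odd} &\implies n = 2k + 1 \text{ for some natural number } k \\
                &\implies n^2 = (2k+1)^2 = 4k^2 + 4k + 1 = 2(2k^2 + 2k) + 1 \\
                &\implies n^2 \text{ is odd.}
\end{align*}
% The contrapositive holds, so the original statement holds too.
```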

One thing to remember (which I think I will forget) is to write those comments explaining what is going on in your proof. It's pretty much the same as commenting your code.

Proofs are something that I take time to understand completely. The tutorial proofs and quiz helped a lot to understand the structure by breaking it down, but the process is still not something that I am fully confident about. Hopefully, next week when we do more proofs, it will all become clearer. 

Friday 3 October 2014

Problem solving episode

So at the end of the last lecture we worked on figuring out whether folding a piece of paper over and over again in the same way produces a pattern that is mathematically predictable. This is how I broke the problem down according to Polya's method of problem solving.

1. We know that we must take a strip of paper and fold its left end over to its right end several times to get a creased pattern on the strip when it is opened up. Each crease thus created points either vertex up or vertex down. The goal of the problem is to determine whether one can logically predict which way the creases will face (up or down), regardless of the number of times the paper is folded over.

2. The plan is to first fold the paper a different number of times and note the pattern of creases created for each number of folds, then identify the similarities and differences in the series of patterns in order to predict the direction of the creases after each fold.

3. Folding the paper once gives:      v
Folding it twice gives:               ^ v v
Folding it thrice gives:              ^ ^ v v ^ v v
Folding it four times gives:          ^ ^ v ^ ^ v v v ^ ^ v v ^ v v
and so on.
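The "and so on" has a nice recursive description (my own way of phrasing it, consistent with the patterns above): each new pattern is a flipped copy of the previous one (reversed, with every crease direction inverted), then a new centre crease v, then the previous pattern itself. A small Python sketch reproduces the observations:

```python
# Generate the crease pattern after n folds. Each new pattern is the
# flip of the previous pattern (reversed, directions inverted), then a
# new centre crease v, then the previous pattern unchanged.

def creases(n):
    pattern = ""
    for _ in range(n):
        flipped = "".join("^" if c == "v" else "v" for c in reversed(pattern))
        pattern = flipped + "v" + pattern
    return pattern

for n in range(1, 5):
    print(n, creases(n))
# 1 v
# 2 ^vv
# 3 ^^vv^vv
# 4 ^^v^^vvv^^vv^vv
```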

4. Looking at the pattern created after each fold, we can see how it follows from the previous fold. For example, after the third fold, the paper has the same creases as after the second fold, and then some more. To make it clearer, we can line the patterns up so that the creases carried over from one fold sit in the same columns at the next:

Fold 1:                v
Fold 2:        ^       v       v
Fold 3:    ^   ^   v   v   ^   v   v
Fold 4:  ^ ^ v ^ ^ v v v ^ ^ v v ^ v v
After laying it out like this, one can see that once a crease is made by a fold, at the next level of folding it is immediately preceded by a vertex-up crease and immediately followed by a vertex-down crease. Also, only a new crease (i.e. a crease that did not exist at the previous level of folding) is preceded and followed in that manner. An old crease (one created before the previous level of folding) just stays the same, with no additional creases around it except for those created as a result of the other new creases.

5. You will get stuck solving this problem if you jump straight into it with no plan of action. It takes a while to wrap your brain around the fact that the creases are not in fact random and do depend on the way you fold them, but once you try folding the paper a few different numbers of times, you will see the pattern. Writing out what you see really helps in understanding what's happening.

Friday 26 September 2014

Take Two

The main topic we covered this week was that of Conjunctions and Disjunctions; what they are and their negations.
I'm sure that I was not the only one in class who saw these as pretty much the math versions of the 'AND' and 'OR' logic gates. The concept was simple enough to grasp, but what I still have a problem with is translating from English to symbols. For example, one of the questions in our preparation for this week's tutorial was "No course is a prerequisite for itself." (1.d.) While doing the earlier parts of the same question, it was easy enough to have one x and one y and either existentially or universally quantify them. This part left me stumped, because how do I quantify two courses when I am talking about only one course? I had similar problems with the rest of the parts of question 1.
Thankfully, all was sorted out during tutorial when I found out that we can in fact, use multiple variables or even the same ones.
So, looking back at what we've done this week, it's hard for me to find something that I found exceptionally difficult. I like to think that I'm a very logical person in general => math logic comes easily to me. That is the case most of the time, except for when someone in my tutorial started asking how "Q only if P" sounds more like "Q iff P" when we say it in English, and I am pretty sure that's when my mind went from "Yeah, I got this" to "What?". I saw his point, but I also (soon) saw the problem with it: Q can happen only if P happens, but P happening doesn't guarantee Q. It got us thinking and I'm sure we all benefited from it.
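That tutorial debate can even be settled mechanically. Here is a little Python truth-table check of my own, reading "Q only if P" as Q => P:

```python
# Truth-table check: "Q only if P" (i.e. Q => P) is NOT the same
# as "Q iff P". Enumerate every truth assignment of P and Q.
from itertools import product

def implies(a, b):
    return (not a) or b

only_if = []   # Q only if P, read as Q => P
iff = []       # Q <=> P

for p, q in product([False, True], repeat=2):
    only_if.append(implies(q, p))
    iff.append(p == q)

print(only_if == iff)  # False: they differ when P is true and Q is false
```

The single disagreeing row (P true, Q false) is exactly the case where P happens but Q doesn't, which "only if" allows and "iff" forbids.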

And thus another week went by and the thing that I worried about the most was still this SLOG. 

Thursday 18 September 2014

The Not-So-Bad Beginning

When I first realized that I was going to have to write a BLOG for a COMPUTER SCIENCE course I was, mildly put, surprised. It turns out this CSC course has more Math and English than Python in it (so far).
In the very beginning, it was the symbols of the mathematical terms in this class that made me want to turn around and walk back out of the room. But it so happens that the mathematical terms we have learnt in the past two weeks, like universal and existential quantifiers, sets, subsets, unions, and intersections, are essentially the same as the ones I have learnt in MAT137 over the same span of time. This made it much easier for my brain to not freeze every time it saw an inverted A or a backward E on the screen. What took longer to grasp was the concept of quantifier claims translated into English, like "P only if/only when Q" and "Not P unless/if not Q". Determining the 'P' and the 'Q' in situations like "Don't knock on it unless you have tried it" or "I will go only if you insist" escaped my understanding until I sat back, blocked out everything else, and clearly thought them out. If I just look at them and try to identify the parts, I am more than likely to get it all wrong. But mulling over them, REALLY thinking about what leads to what, and going through all the options in my mind helps me develop a clear train of thought as to what the 'P' and the 'Q' of the statement are.
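One trick that helped me double-check a translation: compare it with its contrapositive over all truth values. For "Don't knock on it unless you have tried it", my reading is knock => tried, which should agree everywhere with "if you haven't tried it, don't knock". A quick Python sketch (with made-up variable names) confirms it:

```python
# Check that "knock => tried" and its contrapositive
# "not tried => not knock" agree on every truth assignment.
from itertools import product

def implies(a, b):
    return (not a) or b

same = all(
    implies(knock, tried) == implies(not tried, not knock)
    for knock, tried in product([False, True], repeat=2)
)
print(same)  # True: a translation always matches its contrapositive
```

If a candidate translation disagrees with what the English sentence's contrapositive clearly means, the translation is probably wrong.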

Apart from overcoming the challenge of making my mind think in the mathematical way, the thing that I was most worried about was this slog itself. The importance of communication is often underestimated in technical fields such as Computer Science, but it is necessary to understand the value of assignments like this one for the sake of communication. While I may worry about my slog turning into a complete disaster, I hope I can do enough with it to help myself and my classmates in the future, and have a little fun along the way too.