After (or sometimes before) lectures, we will write a blurb on what we did and provide references
to where the material is from. Sometimes we may provide pdfs of rough notes. Warning: the notes
are really rough in that we write them in one pass and don't necessarily go over them again.
So they may contain mistakes. If you find one, please let us know and we will address it soon.

The 1.5-factor approximation algorithm for TSP is due to Christofides from 1976. It is an outstanding
open problem to improve this factor.
The (log n)-factor approximation algorithm for ATSP is due to
Frieze, Galbiati and Maffioli from 1982.
This was improved in 2011 by Asadpour et al. in
this paper.
We may cover this later in the course. A much
newer awesome result is due to Anari and Oveis Gharan.

The greedy algorithm for the set cover problem was discovered independently by many (Johnson, Lovász, Chvátal, mainly in the late seventies). The submodular function maximization result is due to
Nemhauser, Wolsey, and Fisher from 1978. Submodular set cover was studied by Wolsey in 1982. The unconstrained (not-necessarily-monotone) submodular function maximization algorithm
alluded to in class is from this millennium: a FOCS 2012 paper by Buchbinder, Feldman, Naor, and Schwartz.
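To fix ideas, here is a minimal sketch of the greedy set cover rule in Python (unweighted version; the instance format and names are ours, not from any of the papers above): repeatedly pick the set covering the most still-uncovered elements, which yields an H_d <= ln n + 1 approximation, where d is the largest set size.

```python
def greedy_set_cover(universe, sets):
    """Greedy set cover: repeatedly pick the set that covers the most
    still-uncovered elements.  (Unweighted illustrative sketch.)"""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # the set with the largest marginal coverage
        best = max(sets, key=lambda s: len(sets[s] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("the sets do not cover the universe")
        cover.append(best)
        uncovered -= sets[best]
    return cover
```

For example, with universe {1,...,5} and sets A={1,2,3}, B={3,4}, C={4,5}, greedy picks A (three new elements) and then C.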

Our presentation is mainly drawn from Sanjeev Arora's excellent
survey on approximation algorithms for geometric problems. The current best running time for a Euclidean TSP PTAS is due to
Bartal and Gottlieb from 2013. Another related impressive paper is
Klein's PTAS for TSP on planar graphs.

Lecture 4 (Jan 30): Linear Programming Relaxations, Facility Location, GAP. (rough notes)
The facility location algorithm covered in class is due to Shmoys, Tardos, and Aardal in 1998. This was the first constant-factor approximation. The current best factor is
1.488 by Shi Li in
this paper. The approximation algorithm for GAP is due to Shmoys and Tardos from 1993.

Iterated rounding in approximation algorithms was introduced by Kamal Jain in his 1998 paper on the survivable network design problem, which is explored in Problem Set 5.
The proof done in class is due to Nagarajan, Ravi, and Singh from 2010. There is a whole book on iterative methods written by Lau, Ravi, and Singh which has a lot of
applications.

We looked at the use of randomization in approximation algorithms. The use of randomness often makes algorithms easier both to design and analyze. Often such randomized algorithms can be
"derandomized", that is, one can get deterministic algorithms which achieve the same approximation factor, but this is not known to be true in general.
(Whether randomness strictly helps in computation is one of the striking questions in computer science.)
The algorithm for minimum congestion routing is due to Raghavan and Thompson from 1985 and is one of the first uses of Chernoff bounds in algorithm design. There has been PLENTY
of work following it, and this blurb doesn't suffice to summarize it. The algorithm for GAP is due to Fleischer, Goemans, Mirrokni, and Sviridenko from 2006.
A good reference for concentration inequalities is the book by Dubhashi and Panconesi (draft here).
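As a toy illustration of the rounding step (this is not Raghavan and Thompson's full analysis, and the instance format is ours): given, for each demand, candidate paths with fractional LP weights summing to 1, pick one path per demand independently with those probabilities, and then measure the resulting edge congestion.

```python
import random
from collections import defaultdict

def round_paths(candidates, rng=random):
    """For each demand, independently pick one candidate path with
    probability equal to its fractional LP weight (weights sum to 1).
    This is only the rounding step, not the Chernoff-bound analysis."""
    chosen = []
    for paths, weights in candidates:
        r = rng.random()
        acc = 0.0
        for path, w in zip(paths, weights):
            acc += w
            if r <= acc:
                chosen.append(path)
                break
        else:
            # guard against floating-point shortfall in the weights
            chosen.append(paths[-1])
    return chosen

def max_congestion(chosen):
    """Maximum number of chosen paths sharing any single edge."""
    load = defaultdict(int)
    for path in chosen:
        for edge in path:
            load[edge] += 1
    return max(load.values())
```

The Chernoff bound then says the congestion is, with high probability, not much larger than the fractional congestion; simulating with many demands split half-half over two disjoint paths shows the concentration empirically.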

We got a glimpse into the world of sublinear algorithms for approximation problems. I had planned to talk about vertex cover and set cover but only got through vertex cover due to discussions. I summarize and clarify that discussion at the end of the posted notes. These topics are covered excellently in Krzysztof Onak's
Ph.D. thesis. More
recent work by Onak et al. improves the query complexity to essentially optimal bounds.

We looked at the dual of an LP. Then we described dual fitting and the primal-dual methodology on set cover. Then we talked a bit about how the dual can be used to solve LPs with exponentially many variables and polynomially many constraints
(like the configuration LP from last class). Finally, we looked at how primal-dual, dual fitting, and randomized rounding come together to solve the online set cover problem.
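To make the primal-dual mechanics concrete, here is a minimal sketch for set cover (the names and instance format are ours): raise the dual variable of an uncovered element until some set containing it goes tight, then buy all tight sets. The resulting cover costs at most f times the optimum, where f is the maximum element frequency.

```python
def primal_dual_set_cover(universe, sets, cost):
    """Primal-dual set cover sketch: raise the dual variable y[e] of
    each uncovered element until some set containing it goes tight
    (its cost is fully paid by duals), then buy every tight set."""
    slack = dict(cost)                  # cost minus dual mass inside each set
    y = {e: 0.0 for e in universe}
    covered, cover = set(), []
    for e in universe:
        if e in covered:
            continue
        containing = [s for s in sets if e in sets[s]]
        delta = min(slack[s] for s in containing)
        y[e] += delta                   # raise the dual to the first tight set
        for s in containing:
            slack[s] -= delta
            if slack[s] == 0 and s not in cover:
                cover.append(s)         # buy the tight set
                covered |= sets[s]
    return cover, y
```

By construction the duals stay feasible (no set is overpaid), which is exactly what the f-approximation analysis needs.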

We looked at three problems: the minimum s,t-cut, the multiway cut, and the multicut problem. The algorithms done in class are due to Calinescu, Karloff, and Rabani from 1998. The current best approximation factor for the multiway cut problem is (as of March 2015) 1.2965, and is due to
Sharma and Vondrak.

We spent most of our time with the uniform sparsest cut and gave a O(log n) approximation algorithm. The result is due to Leighton and Rao from 1988, but the journal version from 1999
is a beautiful read, and has many generalizations in it. In particular, it gives O(log n) approximations for what is called the minimum conductance cut. The conductance of a cut is defined
as cap(S)/(E[S]·E[bar(S)]) rather than cap(S)/(|S|·|bar(S)|), where E[S] denotes the total degree of the vertices in S.
The general sparsest cut result is via metric embeddings and is due to Linial, London, and Rabinovich from 1995. The metric embedding result itself is due to Bourgain. We didn't have time
to cover this and may do so next class.
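Spelling out the two objectives above (my reconstruction of the rough notes' notation; here cap(S, S̄) is the capacity crossing the cut and E[S] the total degree of the vertices in S):

```latex
\[
\text{sparsity}(S) \;=\; \frac{\operatorname{cap}(S,\bar S)}{|S|\cdot|\bar S|},
\qquad
\text{conductance}(S) \;=\; \frac{\operatorname{cap}(S,\bar S)}{E[S]\cdot E[\bar S]}.
\]
```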

Lecture 11a (Mar 20th): Bourgain's theorem via padded decomposition. (rough notes)

We introduced semidefinite programming, a vector relaxation of integer programs. Goemans and Williamson's MAXCUT algorithm that we saw in class is randomized. The derandomization is nontrivial and is due to Hariharan and Mahajan. An instance with a tight integrality gap was given by Feige and Schechtman; the analysis is a gem.
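The rounding step itself is easy to sketch. Assuming the SDP has already been solved and we are handed unit vectors (the solver is omitted here, and the names are ours), the Goemans-Williamson cut is the sign pattern of inner products with a random Gaussian vector:

```python
import random

def hyperplane_round(vectors, seed=None):
    """Goemans-Williamson rounding: cut the unit vectors of an SDP
    solution by a uniformly random hyperplane through the origin,
    i.e. by the sign of the inner product with a Gaussian vector.
    (Solving the SDP itself is omitted in this sketch.)"""
    rng = random.Random(seed)
    dim = len(next(iter(vectors.values())))
    g = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    return {v: 1 if sum(a * b for a, b in zip(g, x)) >= 0 else -1
            for v, x in vectors.items()}
```

On a 4-cycle, placing opposite vertices at antipodal unit vectors encodes the optimal cut, and every hyperplane recovers it; in general the expected cut value is at least 0.878 times the SDP value.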

We discussed Karger, Motwani and Sudan's
algorithm for coloring 3-colorable graphs. The current best in this line of work is that of
Kawarabayashi and Thorup (2014), who achieve O(n^{0.19996}) colors; this paper uses combinatorics to fight the hard case for the previous best SDP-based algorithm. In the second part of the class, we talked about an approach pioneered by Prasad Raghavendra that may turn out to be the "ultimate" rounding method for a wide class of semidefinite relaxations. We discussed it for the MAXCUT problem. Our presentation is based on the treatment in
Gärtner and Matoušek's book.

In Lecture 11b, we'd already given an SDP relaxation for Max-2SAT. Here, we saw how to make it tighter by adding "triangle inequality constraints". Then, we talked about the so-called canonical SDP relaxation for any Max-k-CSP, which involves scalar variables that give a probability distribution on the local assignment to each clause and vector variables that give a joint distribution on the global assignment to all the variables, as well as "first moment" and "second moment" consistency constraints between the two sets of variables. We also briefly discussed how the SDP can be rounded using the miniaturization method introduced in the last lecture.
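Concretely, the triangle inequality constraints in question take the following standard form for ±1-style vector relaxations, imposed for all triples of vectors including the reference vector (the notation here is ours):

```latex
\[
v_i \cdot v_j + v_j \cdot v_k + v_k \cdot v_i \;\ge\; -1,
\qquad
v_i \cdot v_j - v_j \cdot v_k - v_k \cdot v_i \;\ge\; -1 .
\]
```

Both inequalities hold whenever the v's are ±1 scalars, so they are satisfied by every integral solution and can only tighten the relaxation.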

We shift to showing hardness of approximation. For standard decision problems, we have the tool of polynomial-time reductions and NP-completeness. For optimization problems, we introduced the notion of "gap problems", which are decision problems with a promise (
here's a great survey on problems with promised instances). We looked at reductions from standard NP-complete problems and also approximation-preserving reductions from gap problems. Finally, we introduced PCPs.
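As a concrete example (in our notation, following the [completeness, soundness] convention), the gap version of Max3SAT with completeness c and soundness s is the promise problem

```latex
\[
[c,s]\text{-GapMax3SAT}:\qquad
\begin{cases}
\text{YES} & \text{if } \mathrm{OPT}(\varphi) \ \ge\ c\,m,\\[2pt]
\text{NO}  & \text{if } \mathrm{OPT}(\varphi) \ <\ s\,m,
\end{cases}
\]
```

where m is the number of clauses of φ; an algorithm may answer arbitrarily on instances satisfying neither condition.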

Hastad's theorem on the hardness of Gap-E3Lin mentioned in class is from
this landmark paper. Hastad also showed that [1, 7/8+ε]-GapMax3SAT is NP-hard. We saw in class the weaker fact, through a reduction from Gap-E3Lin, that it is NP-hard to approximate Max3SAT to better than a 7/8 factor. We also introduced the Label Cover problem, discussed its NP-hardness, and saw a reduction from Label Cover to Set Cover. Our presentation was based on Chapter 16 of Williamson-Shmoys.

The definition of unique games is due to a foresightful
work of Subhash Khot. This
survey of his is a very nice overview of the unique games conjecture and the progress so far. We saw a direct reduction from unique games to the multicut problem. We then discussed at a very high level the "long code"-based approach to constructing PCPs. A detailed discussion is beyond the scope of this course, but there are some great lecture notes available online:
here and
here.