Microsoft researcher wins prestigious Turing Award


Leslie Lamport, a principal researcher with Microsoft Research, has been awarded the A.M. Turing Award, the highest honor in the computing industry.

According to the Association for Computing Machinery:

Leslie Lamport originated many of the key concepts of distributed and concurrent computing, including causality and logical clocks, replicated state machines, and sequential consistency. Along with others, he invented the notion of Byzantine failure and algorithms for reaching agreement despite such failures.

A graduate of the legendary Bronx High School of Science, Lamport went on to earn a B.S. and Ph.D. in mathematics at MIT and Brandeis, respectively. In 1978, he wrote a paper, "Time, Clocks, and the Ordering of Events in a Distributed System," that is viewed as the basis for much subsequent work in distributed systems.

Lamport, who works out of Microsoft Research’s Silicon Valley office, joined the company in 2001 after stints at SRI International and Digital…


k-means Clustering

The k-means algorithm is a straightforward and widely used clustering algorithm. Given a set of objects (records), the goal of clustering or segmentation is to divide these objects into groups, or "clusters," such that objects within a group tend to be more similar to one another than to objects belonging to different groups. In other words, clustering algorithms place similar points in the same cluster and dissimilar points in different clusters.

Note that, in contrast to supervised tasks such as regression or classification, where there is a notion of a target value or class label, the objects that form the inputs to a clustering procedure do not come with an associated target. Clustering is therefore often referred to as unsupervised learning. Because there is no need for labeled data, unsupervised algorithms are suitable for many applications where labeled data is difficult to obtain. Unsupervised tasks such as clustering are also often used to explore and characterize a dataset before running a supervised learning task.

Since clustering makes no use of class labels, some notion of similarity must be defined based on the attributes of the objects. The definition of similarity and the method by which points are clustered differ from one clustering algorithm to another, so different algorithms are suited to different types of datasets and different purposes. The "best" clustering algorithm therefore depends on the application, and it is not uncommon to try several algorithms and choose whichever proves most useful.
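As a concrete illustration of the similarity notion described above, here is a minimal sketch in Python (the function names `sq_dist` and `nearest_cluster` are my own, not from the text) that uses squared Euclidean distance to assign a point to its nearest cluster center:

```python
def sq_dist(x, c):
    # Squared Euclidean distance between two equal-length vectors.
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

def nearest_cluster(x, centers):
    # Index of the cluster center closest to point x.
    return min(range(len(centers)), key=lambda j: sq_dist(x, centers[j]))

centers = [(0.0, 0.0), (5.0, 5.0)]
print(nearest_cluster((1.0, 1.0), centers))  # closest to the first center
```

Other similarity measures (cosine similarity, Manhattan distance, and so on) can be substituted here, which is one way different clustering algorithms end up suited to different datasets.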

   1: Input: dataset D, number of clusters k
   2: Output: set of cluster representatives C, cluster membership vector m
   3: /* Initialize cluster representatives C */
   4: Randomly choose k data points from D
   5: Use these k points as the initial set of cluster representatives C
   6: repeat
   7:     /* Data assignment */
   8:     Reassign each point in D to its closest cluster mean
   9:     Update m such that m_i is the cluster ID of the ith point in D
  10:     /* Relocation of means */
  11:     Update C such that c_j is the mean of the points in the jth cluster
  12: until convergence of the objective function Σ_{i=1}^{N} min_j ||x_i − c_j||²
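The pseudocode above can be sketched as a short Python implementation. This is an illustrative version under my own assumptions (the function name `kmeans`, the fixed random seed, and the choice to detect convergence when cluster assignments stop changing are mine, not from the text):

```python
import random

def kmeans(D, k, max_iters=100, seed=0):
    """A sketch of the k-means pseudocode: D is a list of equal-length
    tuples, k the number of clusters. Returns (C, m)."""
    rng = random.Random(seed)
    # Initialize cluster representatives C with k random points from D.
    C = [list(p) for p in rng.sample(D, k)]
    m = [0] * len(D)
    for _ in range(max_iters):
        # Data assignment: reassign each point to its closest cluster mean.
        new_m = [
            min(range(k),
                key=lambda j: sum((xi - cj) ** 2 for xi, cj in zip(x, C[j])))
            for x in D
        ]
        # Relocation of means: c_j becomes the mean of the points in cluster j.
        for j in range(k):
            members = [x for x, mi in zip(D, new_m) if mi == j]
            if members:
                C[j] = [sum(col) / len(members) for col in zip(*members)]
        # Convergence: when assignments stop changing, the objective
        # Σ_i min_j ||x_i − c_j||² can no longer decrease.
        if new_m == m:
            break
        m = new_m
    return C, m
```

For example, on two well-separated pairs of points, `kmeans(D, 2)` assigns each pair to its own cluster regardless of which points are sampled as the initial representatives. Note that k-means converges only to a local minimum of the objective, so in practice it is often run several times with different initializations.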