Spring 2018 / DM843/DM856
Unsupervised Learning

General Information

A trend can be observed across almost all fields of informatics: we have to cope with an ever-increasing amount of available data of all kinds. This amount of data makes it impossible to inspect a dataset "by hand", or to deduce knowledge from it, without sophisticated computer-aided help. In this course we discuss one of the most common mechanisms of unsupervised machine learning for investigating datasets: clustering. Clustering separates a given dataset into groups of similar objects, the clusters, and thus allows for a better understanding of the data and their structure. We discuss a number of clustering methods and their applications to various fields such as biology, economics, and sociology.


# Date Content Slides Comments
1 Mon, 05.02.2018 Introduction here
2 Fri, 09.02.2018 Mathematical Foundations here
3 Mon, 12.02.2018 Detecting Clusters Graphically here
4 Fri, 16.02.2018 PCA here
5 Mon, 19.02.2018 Proximity Measures here
6 Fri, 23.02.2018 Hierarchical Clustering here
7 Mon, 26.02.2018 Optimization Based Clustering here
8 Fri, 02.03.2018 Cancelled
9 Mon, 05.03.2018 Gaussian Mixture Models here
10 Fri, 09.03.2018 Cluster Analysis here
11 Mon, 12.03.2018 Advanced Clustering here
12 Fri, 16.03.2018 Student Talks
13 Mon, 19.03.2018 Student Talks
14 Fri, 23.03.2018 Student Talks

Procedure of the oral exam

The exam will last about 15-20 minutes. At the beginning, one topic from the list below will be drawn randomly. For each topic, the examinee should be prepared to give a short presentation of about 5 minutes. You may bring one page of hand-written notes (DIN A4 or US Letter, one-sided) for each of the topics. The examinee will have 2 minutes to briefly study the notes for the drawn topic before the presentation. The notes may be consulted during the presentation if needed, but doing so will negatively influence the evaluation of the examinee's performance. During the presentation, only the blackboard can be used (you cannot use overhead transparencies, for instance).

After the short presentation, additional questions will be asked, both about the presentation's topic and about other topics in the curriculum.

Below is the list of possible topics and some suggested content. The listed content items are only suggestions; they are not necessarily complete, nor must everything be covered in the short presentation. It is the responsibility of the examinee to gather and select the relevant information for each topic from the course material. On the course website you can find suggested readings for each of these topics.

Topics for the Oral Exam:

  1. Graphical Analysis of Clusters
    • Histograms, Scatter Plots
    • Density Estimation (Parzen Windows, Kernel vs. kNN)
    • Principal Component Analysis
    • Principal Coordinate Analysis
  2. Proximity Measures
    • Different datatypes
    • Common measures for various data types
    • Metrics
    • Similarities for structural data
  3. Hierarchical Clustering
    • Function principles / linking functions
    • Dendrograms
    • Comparison to crisp clusterings
    • Function principle of BIRCH
  4. Optimization Based Clustering
    • Dissection of the co-variance matrix (W and B)
    • Cluster criteria
    • K-means
    • GAP statistic
  5. Cluster Analysis
    • Steps of a cluster analysis
    • Data preprocessing
    • Cluster Validation (internal vs. external)
  6. Expectation Maximization
    • Function principle
    • Gaussian Mixture Models
    • Maximum Likelihood Estimators
    • Similarity to k-means
  7. Advanced Clustering
    • Subspace clustering
    • Ensemble clustering
    • Co-clustering

Student Talks

In order to limit the number of presentations, you should form teams of three students. You will receive one or two scientific papers about the clustering tool assigned to you. You are supposed to create a small presentation which should last no longer than 15 minutes. It is important that you stay within this time frame! In your talk, you should cover the following aspects (if possible):

  • Present the underlying idea and the algorithm of the tool
  • How was the tool evaluated (e.g., on what datasets, compared against what other tools, formal proofs of certain properties, etc.)
  • What is your opinion about the pros and the cons of the algorithm

The paper is only the starting point: use it as the basis for your presentation, but feel free to dig up any other sources.

The slides used during the class: here.

The talks will be on the 16th, 19th and 23rd of March. Please take into account that this talk is mandatory to be eligible for the exam. In case you have not yet registered for a talk, please write me an email and I will assign you a topic. If you have not been in class and haven't selected a topic yet, please choose one from the list below, on a first-come, first-served basis:

  • clarans
  • clusterdp
  • coolcat
  • cure
  • denclue
  • limbo
  • Spatial Partition Clustering

List of so far assigned teams and topics:

# Date (tentative) Topic Team Papers
1 16.03.2018 Markov Clustering Magnus, Katrine E., Katrine B. paper
2 16.03.2018 clusterOne Kristina C., Kamilla paper
3 16.03.2018 Spectral Clustering Katharina, Iustinian, Lennart paper
4 16.03.2018 fuzzy c-means Aleksandrs, Juan, Anders paper 1, paper 2
5 16.03.2018
6 19.03.2018 RNSC Katharina Flugt, Magdalena, Maja, Dea paper
7 19.03.2018 Transitivity Clustering Nima, Matthew Force
8 19.03.2018 Self-Organizing Maps Martin, Simon, Valentina Introduction, Paper 1, Paper 2, SOMs in R
9 19.03.2018 DBSCAN Jonatan, Mathias DBSCAN
10 19.03.2018
11 23.03.2018 densitycut Klaus, Nicolai paper
12 23.03.2018 Affinity Propagation Simon Y., Thomas paper
13 23.03.2018
14 23.03.2018
15 23.03.2018


Project Material

  • Please find the project description here. If anything is unclear, or you think I haven't captured the protocol as we discussed it, please let me know. If you have any further questions, don't hesitate to write me an email or stop by my office.
  • Protein sequences here. And the partial gold standard here.
  • A very short description how to perform a BLAST all-vs.-all run: https://www.biostars.org/p/6541/ Don't forget to set the E-value cut-off to 100.
  • You can download BLAST from the NCBI website: here. There you find the source code as well as precompiled versions for all usual operating systems.
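Assuming BLAST+ is installed, an all-vs-all run along the lines of the linked tutorial might look as follows. The file names are placeholders, not the actual course files:

```shell
# Build a BLAST protein database from the sequence file
# (proteins.fasta and proteins_db are placeholder names).
makeblastdb -in proteins.fasta -dbtype prot -out proteins_db

# All-vs-all run: query the database with the same sequences,
# tabular output (-outfmt 6), E-value cut-off raised to 100.
blastp -query proteins.fasta -db proteins_db \
       -outfmt 6 -evalue 100 -out blast_all_vs_all.tsv
```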

Some Remarks regarding BLAST

If you run blastp with the parameter -outfmt 6 (as stated in the little tutorial), you will receive a file with 12 columns. Their meaning is described, for example, here.

If you look at the link, you will see that you have to use the 11th column (the second to last) in order to get an E-value. With the -log(E-value) conversion, we are calculating similarities. It might happen that some E-values are zero; this means the proteins are extremely similar and thus should also receive a very high similarity. Nevertheless, -log(0) equals infinity, which is technically very high but not really practical. This can be repaired, e.g., in one of the following ways:

  • add a small pseudo count to each E-value (e.g., the smallest real float or double value > 0) before you calculate the log
  • set all zero E-values to the lowest otherwise observed E-value > 0.
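The conversion can be sketched in Python as follows. The column indices follow the 12-column -outfmt 6 layout, the example rows are hypothetical, and the pseudo-count variant (smallest positive float) is used:

```python
import math
import sys

def evalue_to_similarity(evalue, floor=sys.float_info.min):
    """Convert a BLAST E-value into a similarity via -log10.

    A zero E-value would give -log10(0) = infinity, so a tiny
    pseudo count (the smallest positive float by default) is
    added first, as described above.
    """
    return -math.log10(evalue + floor)

# Hypothetical example rows from a `blastp -outfmt 6` file:
# 12 tab-separated columns, E-value in column 11 (index 10).
rows = [
    "q1\ts1\t98.0\t120\t2\t0\t1\t120\t1\t120\t1e-50\t200.0",
    "q1\ts2\t40.0\t100\t60\t0\t1\t100\t1\t100\t0.0\t350.0",
]

similarities = {}
for line in rows:
    cols = line.rstrip("\n").split("\t")
    query, subject = cols[0], cols[1]
    evalue = float(cols[10])  # 11th column, second to last
    similarities[(query, subject)] = evalue_to_similarity(evalue)

print(similarities[("q1", "s1")])  # ≈ 50, i.e. -log10(1e-50)
```

In a real run you would read the rows from the BLAST output file instead of a hard-coded list.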

With that you will receive a valid similarity matrix. If your tool requires distances, you need to convert these similarities to distances in a second preprocessing step.
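One simple way to do that second conversion step is sketched below. Subtracting each similarity from the largest observed one is only one possible choice, not prescribed by the project; any monotonically decreasing transformation works in principle:

```python
def similarities_to_distances(sim):
    """Turn a symmetric similarity matrix (list of lists) into a
    distance matrix by subtracting every entry from the largest
    observed similarity, so the most similar pairs get distance 0.
    """
    s_max = max(max(row) for row in sim)
    return [[s_max - s for s in row] for row in sim]

# Hypothetical 3x3 similarity matrix with self-similarity 50.
sim = [
    [50.0, 10.0, 2.0],
    [10.0, 50.0, 5.0],
    [2.0,  5.0, 50.0],
]
dist = similarities_to_distances(sim)
print(dist[0][1])  # 40.0
```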

Summary of the Project

In this project, we aim to separate a set of proteins into their protein families (it is not of utmost importance to know what exactly a protein family is). Without any prior biological knowledge, it is very hard to identify suitable parameters in order to identify protein families. Luckily, we have a partial gold-standard dataset consisting of 27 families, all belonging to one superfamily. The overall dataset consists of several superfamilies with an unknown number of families.

After selecting three clustering methods, we will use the gold standard in order to train the parameter sets. The best clustering is determined by using the discussed evaluation protocol of last class. These parameters are used in order to cluster the entire dataset. The results of clustering the entire dataset are also evaluated according to the protocol defined in class. At the end, you are supposed to summarize all your findings in a protocol, convincing me to favor one tool over another.

Reading Material on Cluster Evaluation

One of the most important tasks of your project is to evaluate your clusterings in order to determine the best approach to the problem. Besides the lecture slides, you will find here some additional recommended readings on cluster evaluation.

  • Rendón, E. et al. (2011). Internal versus External cluster validation indexes. International Journal of computers and communications, 5(1), 27-34. download
  • Handl, J. et al. (2005). Computational cluster validation in post-genomic data analysis. Bioinformatics, 21(15), 3201-3212. download
  • (Optional) Halkidi, M. et al. (2001). On clustering validation techniques. Journal of Intelligent Information Systems, 17(2-3), 107-145. download
  • (Optional) Xu, Rui, et al. Survey of clustering algorithms. Neural Networks, IEEE Transactions on, 2005, 16. Jg., Nr. 3, S. 645-678. download
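As a concrete illustration of external validation against a gold standard, a minimal sketch of the plain Rand index is given below (the labels are hypothetical; the readings above discuss this and more refined indices such as the adjusted Rand index):

```python
from itertools import combinations

def rand_index(labels_true, labels_pred):
    """Plain Rand index: the fraction of object pairs on which two
    clusterings agree (grouped together in both, or apart in both)."""
    agree = 0
    pairs = list(combinations(range(len(labels_true)), 2))
    for i, j in pairs:
        same_true = labels_true[i] == labels_true[j]
        same_pred = labels_pred[i] == labels_pred[j]
        if same_true == same_pred:
            agree += 1
    return agree / len(pairs)

gold = [0, 0, 1, 1]  # hypothetical gold-standard families
pred = [0, 0, 1, 0]  # hypothetical clustering result
print(rand_index(gold, pred))  # → 0.5 (3 of 6 pairs agree)
```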

Available Clustering Tools

As already announced in the lecture, you have to choose three of the tools listed below. If available, you can download the corresponding paper (either the original publication or a good overview article) for the tool/method. This is only a suggestion. You can also use any of the tools discussed by your fellow students during the presentations, or any other clustering tool you deem appropriate.

Details & Deadline

You are allowed to work in teams of up to 4 students. You will receive feedback on your report and your chosen strategy. Be aware that questions regarding the project might be part of the oral exam. Do not hesitate to contact me in case you have any additional questions.

Upload your report not later than Sunday, 15th of April, 23:59:59 Danish Time


  • All lecture slides are relevant for the exams.
  • All readings noted in the lecture list are relevant for the exam.
  • Brian S. Everitt, Sabine Landau, Morven Leese, Daniel Stahl, Cluster Analysis, 5th Edition, ISBN: 9780470749913
  • A good introduction to R: here
  • R Cheat-Sheets: here