**By Jose Daniel Berba, Machine Learning Researcher at Thinking Machines**.

HDBSCAN is a clustering algorithm developed by Campello, Moulavi, and Sander [8], and stands for *"Hierarchical Density-Based Spatial Clustering of Applications with Noise."*

In this blog post, I'll present in a top-down approach the key concepts to help understand how and why HDBSCAN works. This is meant to complement existing documentation, such as sklearn's "How HDBSCAN works" [1], and other works and presentations by McInnes and Healy [2], [3].

### No (few) assumptions other than some noise

Let's start at the very top. Before we even describe our clustering algorithm, we should ask, *"what kind of data are we trying to cluster?"*

We want to have as few assumptions about our data as possible. Perhaps the only assumptions that we can safely make are:

- There is noise in our data
- There are clusters in our data which we hope to discover

**Clustering data set**

To motivate our discussion, we start with the data set used in [1] and [3].

With only 2 dimensions, we can plot the data and identify 6 "natural" clusters in our dataset. We hope to identify these automatically through some clustering algorithm.

**K-means vs. HDBSCAN**

Knowing the expected number of clusters, we run the classical *K-means* algorithm and compare the resulting labels with those obtained using *HDBSCAN*.

Even when provided with the correct number of clusters, K-means clearly fails to group the data into useful clusters. HDBSCAN, on the other hand, gives us the expected clustering.

**Why does K-means fail?**

In short, **K-means performs poorly because the underlying assumptions on the shape of the clusters are not met**; it is a parametric algorithm parametrized by the *K cluster centroids*, the centers of Gaussian spheres. K-means performs best when clusters are:

- "round" or spherical
- equally sized
- equally dense
- most dense in the center of the sphere
- not contaminated by noise/outliers

Let us borrow a simpler example from ESLR [4] to illustrate how K-means can be sensitive to the shape of the clusters. Below are two clusterings from the same data. On the left, the data was standardized before clustering. Without standardization, we get a "wrong" clustering.

*Figure 14.5 from ESLR chapter 14 [4]. Clusters from standardized data (left) vs. clusters from raw data (right).*

**What are the characteristics of our data?**

We go back to our original data set, and by simply describing it, it becomes obvious why K-means has a hard time. The data set has:

- Clusters with arbitrary shapes
- Clusters of different sizes
- Clusters with different densities
- Some noise and maybe some outliers

**Need robustness for data exploration**

While each bullet point is reasonably expected of real-world data, each one can be problematic for parametric algorithms such as K-means. We would want to check whether the assumptions of our algorithms are met before trusting their output. However, checking these assumptions can be difficult when little is known about the data. This is unfortunate because **one of the major uses of clustering algorithms is data exploration**, where we are still in the process of understanding the data.

Therefore, a clustering algorithm to be used for data exploration needs to have as few assumptions as possible so that the initial insights we get are "useful"; having fewer assumptions makes it more robust and applicable to a wider range of real-world data.

### Dense regions and multivariate modes

Now that we have an idea of what kind of data we are dealing with, let's explore the core ideas of HDBSCAN and how it excels even when the data has:

- Arbitrarily shaped clusters
- Clusters with different sizes and densities
- Noise

HDBSCAN uses a density-based approach, which makes few implicit assumptions about the clusters. It is a non-parametric method that looks for a cluster hierarchy shaped by the multivariate modes of the underlying distribution. Rather than looking for clusters with a particular shape, it looks for regions of the data that are denser than the surrounding space. The mental image you can use is trying to separate the islands from the sea, or the mountains from their valleys.

**What is a cluster?**

How do we define a "cluster"? The characteristics of what we intuitively think of as a cluster can be poorly defined and are often context-specific. (See Christian Hennig's talk [5] for an overview.)

If we go back to the original data set, the reason we identify clusters is that we see 6 dense regions surrounded by sparse and noisy space.

*Encircled regions are highly dense.*

One way of defining a cluster that is usually consistent with our intuitive notion is *highly dense regions separated by sparse regions*.

Look at the plot of 1-d simulated data below. We can see 3 clusters.

**Looking at the underlying distribution**

*X* is simulated data from a mixture of normal distributions, and we can plot the actual probability distribution of X.

*Peaks = dense regions. Troughs = sparse regions.*

The peaks correspond to the densest regions, and the troughs correspond to the sparse regions. This gives us another way of framing the problem, assuming we know the underlying distribution, where *clusters are highly probable regions separated by improbable regions*. Imagine the higher-dimensional probability distributions forming a landscape of *mountains and valleys*, where the *mountains are your clusters*.

*Coloring the 3 peaks/mountains/clusters.*

For those not as familiar, the two statements are practically the same:

*highly dense regions separated by sparse regions*
*highly probable regions separated by improbable regions*

One describes the data through its probability distribution, and the other through a random sample from that distribution.

The PDF plot and the strip plot above are equivalent. The PDF, or *probability density function*, is interpreted as the probability at that point (or a small region around it), and when looking at a sample from *X*, it can also be interpreted as the expected density around that point.

Given the underlying distribution, we expect that regions that are more probable will tend to have more points (denser) in a random sample. Similarly, given a random sample, you can make inferences about the probability of a region based on the empirical density.

**Denser regions in the random sample correspond to more probable regions in the underlying distribution.**

In fact, if we look at the histogram of a random sample of X, we see that it looks exactly like the true distribution of X. The histogram is sometimes called the *empirical probability distribution*, and with enough data, we expect the histogram to converge to the true underlying distribution.
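To make the convergence claim concrete, here is a small numpy check on a mixture of three normals. The exact mixture the blog uses isn't specified, so the components below are assumptions:

```python
# Check that a histogram (the empirical distribution) tracks the true
# PDF of a mixture of normals. The 3-component mixture is an assumed
# stand-in for the blog's simulated X.
import numpy as np

rng = np.random.default_rng(0)
X = np.concatenate([
    rng.normal(-5, 1.0, 4000),
    rng.normal(0, 0.8, 4000),
    rng.normal(5, 1.2, 4000),
])

counts, edges = np.histogram(X, bins=60, density=True)
centers = (edges[:-1] + edges[1:]) / 2

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def true_pdf(x):
    # Equal-weight mixture density.
    return (normal_pdf(x, -5, 1.0) + normal_pdf(x, 0, 0.8) + normal_pdf(x, 5, 1.2)) / 3

# The histogram heights should correlate strongly with the true density.
corr = np.corrcoef(counts, true_pdf(centers))[0, 1]
print(round(corr, 3))
```

With 12,000 samples the correlation between histogram heights and the true density is close to 1.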

Again, density = probability. Denser = more probable.

**But… what is a cluster?**

Unfortunately, even with our *"mountains and valleys"* definition of clusters, it can be difficult to know whether or not something is a single cluster. Look at the example below where we shifted one of the modes of X to the right. Although we still have 3 peaks, do we have 3 clusters? In some contexts, we might consider 3 clusters. "Intuitively," we say there are just 2 clusters. How do we decide?

By looking at the strip plot of *X'*, we can be a bit more certain that there are just 2 clusters.

*X* has 3 clusters, and *X'* has 2 clusters. At what point does the number of clusters change?

One way to define this is to set some global threshold for the PDF of the underlying distribution. The connected components from the resulting level-sets are your clusters [3]. This is what the algorithm DBSCAN does, and doing it at multiple levels would result in DeBaCl [6].

*Two different clusterings based on two different level-sets.*

This can be appealing because of its simplicity, but don't be fooled! We end up with an extra hyperparameter, the threshold *𝜆*, which we might have to fine-tune. Moreover, this does not work well for clusters with different densities.

To help us choose, we color our cluster choices, as shown in the illustration below. Should we consider *blue* and *yellow*, or *green* only?

*3 clusters on the left vs. 2 clusters on the right.*

To choose, we look at which one "persists" more. Do we see them more together or apart? We can quantify this using the area of the colored regions.

On the left, we see that the sum of the areas of the *blue* and *yellow* regions is greater than the area of the *green* region. This means that the two peaks are more prominent, so we decide that they are two separate clusters.

On the right, we see that the area of *green* is much larger. This means that they are just "bumps" rather than peaks. So we say that they are just one cluster.

In the literature [2], the area of the regions is the measure of *persistence*, and the method is called *eom* or *excess of mass*. A bit more formally, *we maximize the total sum of persistence of the clusters under the constraint that the selected clusters are non-overlapping.*

**Building the hierarchy**

By getting multiple level-sets at different values of *𝜆*, we get a hierarchy. For a multidimensional setting, imagine the clusters are islands in the middle of the ocean. As you lower the sea level, the islands will start to "grow," and eventually, islands will start to connect.

To be able to capture and represent these relationships between clusters (islands), we represent them as *a hierarchy tree*. This representation generalizes to higher dimensions and is a natural abstraction that is easy to represent as a data structure that we can traverse and manipulate.

*Visualizing the cluster hierarchies as a tree.*

By convention, trees are drawn top-down, where the root (the node where everything is just one cluster) is at the top, and the tree grows downward.

*Visualizing the tree top-down.*

If you are using the HDBSCAN library, you can use the *clusterer.condensed_tree_.plot()* API. The result of this, shown below, is equivalent to the one shown above. The encircled nodes correspond to the selected clusters, which are the *yellow*, *blue*, and *purple* regions, respectively.

*Condensed tree plot from HDBSCAN.*

When using HDBSCAN, this particular plot can be useful for assessing the quality of your clusters and can help with fine-tuning the hyperparameters, as we will discuss in the *"Parameter Selection"* section.

### Locally Approximating Density

In the previous section, we had access to the true PDF of the underlying distribution. However, **the underlying distribution is almost always unknown for real-world data.**

Therefore, we have to estimate the PDF using the empirical density. We already discussed one way of doing this, using a histogram. However, this is only useful for one-dimensional data and becomes computationally intractable as we increase the number of dimensions.

We need other ways to get the empirical PDF. Here are two:

- Counting the number of neighbors of a particular point within its 𝜀-radius
- Finding the distance to the K-th nearest neighbor (which is what HDBSCAN uses)

**Count neighbors within the 𝜀-radius**

For each point, we draw an 𝜀-radius hypersphere around the point and count the number of points within it. This is our local approximation of the density at that point in space.

*Estimation of the PDF using neighbor counts.*

We do this for every point, and we compare the estimated PDF with the true value of the PDF (which we can only do because we simulated the data and defined its distribution ourselves).

For our 1-dimensional simulated data, the neighbor count is highly correlated with the true value of the PDF. A higher number of neighbors results in a higher estimated PDF.

*Estimating the PDF of X using neighbor counts, eps = 0.1.*

We see that this method results in good estimates of the PDF for our simulated data X. Note that this can be sensitive to the scale of the data and the sample size. You might have to iterate over multiple values of 𝜀 to get good results.
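A minimal numpy sketch of the 𝜀-neighbor-count estimate, on an assumed 1-d mixture rather than the blog's exact data:

```python
# Density estimation by counting neighbors within an eps-radius.
# The 1-d two-component mixture here is an assumed stand-in.
import numpy as np

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-3, 0.5, 1000), rng.normal(3, 1.0, 1000)])

def neighbor_counts(X, eps):
    # Brute-force pairwise distances; fine for a small 1-d demo.
    d = np.abs(X[:, None] - X[None, :])
    return (d <= eps).sum(axis=1) - 1  # exclude the point itself

counts = neighbor_counts(X, eps=0.1)

# Points near the mode at -3 should have far more neighbors than
# points out in the left tail (beyond -4).
near_mode = counts[np.abs(X + 3) < 0.2].mean()
in_tail = counts[X < -4].mean()
print(near_mode > in_tail)
```

The neighbor count acts as an (unnormalized) density estimate: large near the modes, small in the tails.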

**Distance to the K-th nearest neighbor**

Here, we take the complement of the previous approach. Instead of setting 𝜀 and then counting the neighbors, we pick the number of neighbors we want and find the smallest value of 𝜀 that would contain those K neighbors.

*Core distances for K = 7.*

The results are what we call *core distances* in HDBSCAN. Points with smaller core distances are in denser regions and would have a high estimate for the PDF. Points with larger core distances are in sparser regions, because we have to travel larger distances to include enough neighbors.

*Estimating the PDF of X using core distance, where K = 100.*

We try to estimate the PDF on our simulated data X. In the plots above, we use *1/core_distance* as the estimate of the PDF. As expected, the estimates are highly correlated with the true PDF.

While the previous method was sensitive to both the scale of the data and the size of the data set, this method is mainly sensitive to the size of the data set. If you scale each dimension equally, then all core distances will increase proportionally.
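The core-distance estimate can be sketched the same way, again on an assumed 1-d mixture:

```python
# Core distance: the distance to the K-th nearest neighbor.
# Its inverse acts as a density estimate.
import numpy as np

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(-3, 0.5, 1000), rng.normal(3, 1.0, 1000)])

def core_distances(X, k):
    d = np.abs(X[:, None] - X[None, :])
    d.sort(axis=1)          # column 0 is each point's distance to itself
    return d[:, k]          # distance to the k-th nearest neighbor

cd = core_distances(X, k=7)
density = 1.0 / cd          # smaller core distance -> denser region

# The estimate should be much higher near the mode at -3 than in the tail.
print(density[np.abs(X + 3) < 0.2].mean() > density[X < -4].mean())
```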

The key takeaway here is:

- core distance = estimate of density
- (recall that) density = probability

**core distance = some estimate of the PDF**

So when we refer to a point's *core distance*, you can think of it as implicitly referring to the *PDF*. Filtering points based on the core distance is similar to obtaining a level-set from the underlying distribution.

Whenever we have *core_distance ≤ 𝜀*, there is an implicit *pdf(x) ≥ 𝜆* happening. There is always a mapping between 𝜀 and 𝜆, and we will just use the symbol 𝜆 for both core distances and the PDF for simplicity.

**Find the level-set and color the regions**

Recall that in the previous examples, we get a level-set from the PDF, and the resulting regions are our clusters. This was easy because a region was represented as some shape. But when we are dealing with points, how do we know what the different regions are?

We have a small data set on the left and its corresponding PDF on the right.

The PDF is not "accurate."

The first step is to find the level-set at some *𝜆*. We filter for regions *pdf(x) ≥ 𝜆*, or filter for points with *core_distance ≤ 𝜆*.

Now we need to find the different regions. This is done by connecting "nearby" points to each other. "Nearby" is determined by the current density level defined by *𝜆*, and we say that two points are near enough if their Euclidean distance is less than *𝜆*.

We draw a sphere with radius *𝜆* around each point.

We connect the point to all points within its *𝜆*-sphere. If two points are connected, they belong to the same region and will have the same color.

Do this for every point, and what we are left with are multiple connected components. These are our clusters.

This is the clustering you get at some level-set. We continue to "lower the sea" and keep track as new clusters appear, some clusters grow, and eventually, some merge together.
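The procedure at a single level-set can be sketched with a small union-find, using toy data and made-up core distances:

```python
# Clusters at one level-set: keep points whose core distance is at most
# lambda, connect kept points closer than lambda (union-find), and read
# off the connected components. Toy data and core distances are assumed.
import numpy as np

def clusters_at_level(X, core_dist, lam):
    idx = np.where(core_dist <= lam)[0]       # points in the level-set
    parent = {i: i for i in idx}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]     # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            i, j = idx[a], idx[b]
            if abs(X[i] - X[j]) <= lam:       # "near enough" at this level
                union(i, j)

    comps = {}
    for i in idx:
        comps.setdefault(find(i), []).append(i)
    return list(comps.values())

# Two dense clumps separated by a gap, plus one outlier at 10.0.
X = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 10.0])
core = np.array([0.2, 0.1, 0.2, 0.2, 0.1, 0.2, 4.8])  # assumed core distances
print(len(clusters_at_level(X, core, lam=0.3)))  # 2: the outlier is excluded
```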

**Lowering the sea level**

Here are 4 visualizations showing the clusters at 4 different level-sets. We keep track of the different clusters so that we can build the hierarchy tree, which we have previously discussed.

**Defining a new distance metric**

I want to highlight that points can be inside the *𝜆*-sphere and still not be connected. They need to be included in the level-set first, so *𝜆* has to be greater than a point's core distance for that point to be considered.

The value of *𝜆* at which two points finally connect can be interpreted as a new kind of distance. For two points to be connected, they must be:

- In a dense enough region
- Close enough to each other

For *a* and *b*, we get the following inequalities in terms of *𝜆*:

1. core_distance(a) ≤ *𝜆*
2. core_distance(b) ≤ *𝜆*
3. distance(a, b) ≤ *𝜆*

(1) and (2) are for *"In a dense enough region."* (3) is for *"Close enough to each other."*

Combining these inequalities, the smallest value of *𝜆* needed to directly connect a and b is

mutual_reachability_distance(a, b) = max(core_distance(a), core_distance(b), distance(a, b))

This is called the *mutual reachability distance* in the HDBSCAN literature.
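In code, the definition is a direct translation of the three inequalities:

```python
# The mutual reachability distance: the smallest lambda at which
# points a and b can be directly connected.
import numpy as np

def mutual_reachability(a, b, core_a, core_b):
    """a, b: coordinate arrays; core_a, core_b: their core distances."""
    d = float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
    return max(core_a, core_b, d)

# Two points close in Euclidean distance, but b sits in a sparse region
# (large core distance), so the new metric pushes them apart.
print(mutual_reachability([0, 0], [0, 1], core_a=0.2, core_b=3.0))  # 3.0
# In a dense region, the plain Euclidean distance dominates.
print(mutual_reachability([0, 0], [0, 1], core_a=0.2, core_b=0.3))  # 1.0
```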

**Projection to 𝜆-space**

*Note: "lambda space" is a term not found in the literature. It is just for this blog.*

Instead of using the Euclidean distance as our metric, we can now use the mutual reachability distance as our new metric. Using it as a metric is equivalent to embedding the points in some new metric space, which we will simply call *𝜆-space*.

*The repelling effect. Circles represent the core distance of each point.*

This has the effect of spreading apart close points in sparse regions.

Due to the randomness of sampling, two points can end up close to each other in a very sparse region. However, we expect points in sparse regions to be far apart from each other. By using the mutual reachability distance, points in sparse regions "repel" other points that get too close, while points in very dense regions are unaffected.

Below is a plot of the points in 𝜆-space projected using *Multidimensional Scaling* to show the effect more concretely.

We can see this repelling effect on the left and on top. The 4 points on the left are spread out the most because they are in a very sparse region.

**Building the hierarchy tree using 𝜆-space**

Recall that to build the hierarchy tree, we have the following steps:

1. Set *𝜆* to the smallest value of the core distances
2. Filter for the points in the level-set
3. Connect points that are at most *𝜆* units apart
4. Create new clusters, grow existing clusters, and merge clusters
5. Set *𝜆* to the next smallest value of the core distances and go to step (2)

Notice that when doing step (3), connecting two points that already belong to the same connected component is useless. **What really matters are the connections across clusters. The connection that joins two clusters corresponds to the pair of points from two different clusters with the smallest mutual reachability distance.** If we ignore these "useless" connections and only note the relevant ones, what we are left with is an ordered list of edges that always merge two clusters (connected components).

*Connections dropping the "useless" edges… Is that a minimum spanning tree forming?*

This might sound complicated, but it can be simplified if we consider the *mutual reachability distance* as our new metric:

1. Embed the points in 𝜆-space and consider each point as a separate cluster
2. Find the shortest distance between two points from two different clusters
3. Merge the two clusters
4. Go back to step (2) until there is only one cluster

If this sounds familiar, it is classical agglomerative clustering. This is just single linkage clustering in 𝜆-space!

Doing single linkage clustering in Euclidean space can be sensitive to noise, since noisy points can form spurious bridges across islands. By embedding the points in 𝜆-space, the "repelling effect" makes the clustering much more robust to noise.

Single linkage clustering is conveniently equivalent to building a minimum spanning tree! So we can use all the efficient techniques from graph theory for constructing the MST.

*Minimum spanning tree from HDBSCAN.*
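The MST-over-mutual-reachability idea can be sketched in a few lines with scipy. This is only an illustration; the hdbscan library uses faster specialized MST algorithms:

```python
# Single linkage in lambda-space = minimum spanning tree over the
# mutual reachability distances. A sketch, not HDBSCAN's optimized MST.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))

k = 5
D = squareform(pdist(X))                 # Euclidean distances
core = np.sort(D, axis=1)[:, k]          # distance to the k-th neighbor

# Mutual reachability: max(core[i], core[j], D[i, j]) for every pair.
mreach = np.maximum(D, np.maximum(core[:, None], core[None, :]))
np.fill_diagonal(mreach, 0.0)

mst = minimum_spanning_tree(mreach)
print(mst.nnz)  # an MST over n points has n - 1 edges
```

Sorting the MST's edges by weight gives exactly the ordered list of cluster-merging connections described above.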

### Parameter Selection and Other Notes

Now we go through some notes about the main parameters of HDBSCAN, *min_samples* and *min_cluster_size*, and about HDBSCAN in general.

**min_samples**

Recall our simulated data X, where we try to estimate the true PDF.

We try to estimate it using the core distances, i.e., the distance to the K-th nearest neighbor. The hyperparameter K is called *min_samples* in the HDBSCAN API.

These are just empirical observations from the simulated data. We compare the plot above with the estimated PDF based on different values of *min_samples*.

*Estimated PDF based on a sample size of 10,000.*

As you can see, setting *min_samples* too low will result in very noisy estimates of the PDF, since the core distances become sensitive to local variations in density. This can lead to spurious clusters, or some big cluster can end up fragmenting into many small clusters.

Setting *min_samples* too high can smooth the PDF too much. The finer details of the PDF are lost, but at least you are able to capture the bigger, more global structures of the underlying distribution. In the example above, the two small clusters were "blurred" into just one cluster.

Determining the optimal value for *min_samples* can be tricky and is ultimately data-dependent. Don't be misled by the high value of *min_samples* that we are using here. We used 1-d simulated data that has smooth variations in density across the space and only 3 clusters. Typical real-world data have wholly different characteristics, and smaller values for *min_samples* are enough.

The insight on the smoothing effect is definitely applicable to other datasets. Increasing the value of *min_samples* smooths the estimated distribution so that small peaks are flattened, and we get to focus only on the denser regions.

The simplest intuition for what *min_samples* does is provide a measure of how conservative you want your clustering to be. The larger the value of *min_samples* you provide, the more conservative the clustering – more points will be declared as noise, and clusters will be restricted to progressively more dense areas. [7]

Be careful: one possible side effect of this is longer running times, because you have to find more "nearest neighbors" per point, and it may also require more memory.
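The smoothing effect of *min_samples* can be checked empirically. In the sketch below, "bumpiness" of the 1/core_distance estimate is measured as the summed jump between neighboring points, normalized by the estimate's mean; the data is an assumed standard normal sample:

```python
# Empirical sketch of the smoothing effect of min_samples (K): a larger
# K yields a smoother 1/core_distance density estimate.
import numpy as np

rng = np.random.default_rng(0)
X = np.sort(rng.normal(size=1500))

def density_estimate(X, k):
    d = np.abs(X[:, None] - X[None, :])
    d.sort(axis=1)
    return 1.0 / d[:, k]    # inverse distance to the k-th neighbor

def bumpiness(est):
    # Total variation between neighboring points, scale-normalized.
    return np.abs(np.diff(est)).sum() / est.mean()

# Small K -> noisy, bumpy estimate; large K -> smooth estimate.
print(bumpiness(density_estimate(X, 5)) > bumpiness(density_estimate(X, 100)))
```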

**min_cluster_size**

Notice that the underlying PDF that we are trying to estimate is very smooth, but because we are estimating it from a sample, we expect some variance in our estimates.

This results in a "bumpy" estimated PDF. Let's focus on a small area of the PDF to illustrate this.

What is the effect of this bumpiness on the hierarchy tree? Well, it affects the persistence measures of the clusters.

Because the little bumps are interpreted as mini-clusters, the persistence measures of the true clusters are divided into small segments. Without removing the bumps, the main cluster may not be seen by the *excess of mass* method. Instead of seeing a large smooth mountain, it sees a collection of numerous mini-peaks.

To solve this, we flatten these small bumps. This is done by "trimming" the clusters that are not big enough in the hierarchy tree. The effect of this is that the *excess of mass* method is no longer distracted by the small bumps and can now see the main cluster.

*min_cluster_size* dictates the maximum size of a "bump" before it is considered a peak. By increasing the value of *min_cluster_size* you are, in a way, smoothing the estimated PDF so that the true peaks of the distribution become prominent.

Since we have access to the true PDF of X, we know a good value of *min_samples*, which will result in a smooth estimated PDF. If the estimates are good, then *min_cluster_size* is not as important.

*Ideal condensed tree.*

Let's say we used a smaller value for *min_samples* and set it to 100. If you look at the PDF plot, it has the general shape of the PDF, but there is noticeable variance.

Although we know there should only be 3 peaks, we see multiple small peaks.

If you see a more extreme version of this, where perhaps you can't even see the colors of the bars anymore, then that would mean that the hierarchy tree is complicated. Maybe it is because of the variance of the estimates, or maybe that is really how the data is structured. One way to deal with this is by increasing *min_cluster_size*, which helps HDBSCAN simplify the tree and focus on bigger, more global structures.

**Data Transformations**

Although we have established that HDBSCAN can find clusters even with arbitrary shapes, this does not mean there is no need for data transformations. It really depends on your use cases.

Scaling certain features can increase or decrease the influence of those features. Also, some transformations, such as the *log* and *square root* transforms, can change the shape of the underlying distribution altogether.

**Assessing cluster quality**

Another insight worth noting is that classical ways of assessing and summarizing clusters may not be as meaningful when using HDBSCAN. Some metrics, such as the *silhouette score*, work best when the clusters are spherical.

For the "moons" dataset in *sklearn*, *K-means* has a better silhouette score than the result of HDBSCAN, even though we can see that the clusters from HDBSCAN are better.

This also applies to summarizing a cluster by taking the mean of all its points. This is very useful for K-means and is a good prototype of the cluster. But for HDBSCAN, it can be problematic because the clusters aren't spherical.

*The hollow circles are the "centroids" of the clusters.*

The mean point can be far from the actual cluster! This can be very misleading and can lead to wrong insights. You might want to use something like a *medoid*, which is a point that is part of the cluster and closest to all the other points. But be careful: you might lose too much information trying to summarize a complex shape with just one point in space.
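The centroid-vs-medoid contrast is easy to demonstrate on a ring-shaped cluster, where the mean falls in the empty middle:

```python
# Medoid as a cluster summary: the member point minimizing the total
# distance to all other members. Unlike the centroid, it always lies
# inside the cluster.
import numpy as np

def medoid(points):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return points[d.sum(axis=1).argmin()]

# A ring-shaped cluster on the unit circle.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ring = np.c_[np.cos(theta), np.sin(theta)]

centroid = ring.mean(axis=0)
m = medoid(ring)
print(np.linalg.norm(centroid))       # ~0: the centroid sits in the hole
print(round(np.linalg.norm(m), 3))    # 1.0: the medoid stays on the ring
```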

This all really depends on what kind of clusters you prefer and the underlying data you are processing. See Hennig's talk [5] for an overview of cluster analysis.

### HDBSCAN Recap

We are done! We have discussed the core ideas of HDBSCAN! We'll breeze through some specific implementation details as a recap.

A rough sketch of HDBSCAN's implementation goes as follows:

1. Compute the core distances per point
2. Use *mutual_reachability(a, b)* as the distance metric for each pair a, b
3. Construct a minimum spanning tree
4. Prune the tree
5. Select the clusters using *"excess of mass"*

**Compute the core distances per point**

This is basically the way we "estimate the underlying PDF."

**Minimum spanning tree using the mutual reachability distance**

The mutual reachability distance is a summary of the level of *𝜆* at which two points will connect. This is what we use as the new metric.

Building the minimum spanning tree is equivalent to single linkage clustering in 𝜆-space, which is equivalent to iterating through every possible level-set and keeping track of the clusters.

**Prune the resulting tree**

In short, since what we have is just an estimated PDF, we expect some variance. So even if the underlying distribution is very smooth, the estimated PDF can be very bumpy and therefore result in a very complicated hierarchy tree.

We use the parameter *min_cluster_size* to smooth the curves of the estimated distribution and, as a result, simplify the tree into the *condensed_tree_*.

**Select the clusters using "excess of mass"**

Using the *condensed tree*, we can estimate the persistence of each cluster and then compute the optimal clustering, as discussed in the previous section.

### References

[1] https://hdbscan.readthedocs.io/en/latest/how_hdbscan_works.html

[2] McInnes, Leland, and John Healy. “Accelerated hierarchical density clustering.” *arXiv preprint arXiv:1705.07321* (2017).

[3] John Healy. HDBSCAN, Fast Density-Based Clustering, the How and the Why. PyData NYC. 2018

[4] Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. *The Elements of Statistical Learning: Data Mining, Inference, and Prediction*. Springer Science & Business Media, 2009.

[5] Christian Hennig. Assessing the quality of a clustering. PyData NYC. 2018.

[6] Alessandro Rinaldo. DeBaCl: a Density-based Clustering Algorithm and its Properties.

[7] https://hdbscan.readthedocs.io/en/latest/parameter_selection.html

[8] Campello, Ricardo J. G. B., Davoud Moulavi, and Jörg Sander. "Density-based clustering based on hierarchical density estimates." *Pacific-Asia Conference on Knowledge Discovery and Data Mining*. Springer, Berlin, Heidelberg, 2013.

Original. Reposted with permission.

**Bio:** Jose Daniel Berba used to work in cybersecurity, and is now a Machine Learning Researcher at Thinking Machines and a graduate student in data science.
