payititi-AI Assistant
2021-11-28 11:21:16
The DBSCAN Algorithm Procedure
The core idea of the DBSCAN algorithm is as follows: starting from a selected core point, keep expanding into the density-reachable region, so as to obtain a maximal region containing core points and border points, in which any two points are density-connected.
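To make the notions of ε-neighborhood, core point, and density-reachability concrete, here is a minimal sketch assuming Euclidean distance; the helper names region_query and is_core_point are my own, and the values eps = 0.5, min_pts = 3 are purely illustrative:

import numpy as np

def region_query(X, i, eps):
    # Indices of all points whose Euclidean distance to X[i] is <= eps (X[i] itself included).
    dists = np.linalg.norm(X - X[i], axis=1)
    return np.where(dists <= eps)[0]

def is_core_point(X, i, eps, min_pts):
    # A point is a core point if its eps-neighborhood contains at least min_pts points.
    return len(region_query(X, i, eps)) >= min_pts

# Toy data: a dense clump plus one isolated point.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [3.0, 3.0]])
print(is_core_point(X, 0, eps=0.5, min_pts=3))  # True  (inside the dense clump)
print(is_core_point(X, 4, eps=0.5, min_pts=3))  # False (isolated point)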
Consider a data set X = {x(1), x(2), ..., x(N)}. The goal of DBSCAN is to partition X into K clusters (K is inferred automatically by the algorithm and does not need to be specified in advance) together with a set of noise points. To this end, a cluster label array {m(i)} is introduced, where m(i) = j if x(i) is assigned to the j-th cluster, and m(i) = -1 if x(i) is a noise point.
The goal of DBSCAN is therefore to produce this label array, and K is simply the number of distinct non-negative values appearing in {m(i)}.
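As a small illustration of this convention (the label values below are made up; -1 marks noise, matching the convention scikit-learn's DBSCAN also uses in its labels_ attribute), K can be read directly off the label array:

import numpy as np

# Hypothetical label array m for 8 points: clusters 0, 1, 2 plus two noise points.
m = np.array([0, 0, 1, -1, 1, 2, 2, -1])

# K is the number of distinct non-negative labels; noise (-1) is excluded.
K = len(set(m[m >= 0]))
print(K)  # 3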
Input: sample set D = (x1, x2, ..., xm), together with the neighborhood parameters (ε, MinPts).
Output: the cluster partition C.
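The step-by-step pseudocode is not reproduced here, so below is a minimal, brute-force Python sketch of the expansion procedure, intended only as an illustration; the dbscan function, the UNVISITED/NOISE markers, and the O(n²) Euclidean neighborhood search are my own choices, not a reference implementation:

import numpy as np

def dbscan(D, eps, min_pts):
    # Returns a label array m: m[i] = j means point i belongs to cluster j (j = 0, 1, ...),
    # and m[i] = -1 means noise.
    n = len(D)
    UNVISITED, NOISE = -2, -1
    m = np.full(n, UNVISITED)
    k = -1  # index of the current cluster

    def region_query(i):
        # Brute-force eps-neighborhood under Euclidean distance, point i included.
        return list(np.where(np.linalg.norm(D - D[i], axis=1) <= eps)[0])

    for i in range(n):
        if m[i] != UNVISITED:
            continue
        neighbors = region_query(i)
        if len(neighbors) < min_pts:
            m[i] = NOISE  # provisionally noise; it may later join a cluster as a border point
            continue
        # i is a core point: start a new cluster and expand it.
        k += 1
        m[i] = k
        seeds = list(neighbors)
        while seeds:
            j = seeds.pop()
            if m[j] == NOISE:      # previously noise -> border point of cluster k
                m[j] = k
            if m[j] != UNVISITED:
                continue
            m[j] = k
            j_neighbors = region_query(j)
            if len(j_neighbors) >= min_pts:
                # j is also a core point: its neighborhood is merged into the seed set.
                seeds.extend(j_neighbors)

    return m

# Usage on a toy data set: two dense clumps and one outlier.
D = np.array([[0, 0], [0.1, 0], [0, 0.1],
              [5, 5], [5.1, 5], [5, 5.1],
              [10, 10]], dtype=float)
print(dbscan(D, eps=0.5, min_pts=3))  # [0 0 0 1 1 1 -1]

In this sketch, a point first marked NOISE can still be relabeled as a border point of a cluster, while border points are never pushed back onto the seed list, so they do not trigger any further expansion.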
As can be seen, while DBSCAN keeps discovering new core points, it also uses direct density-reachability to find core points lying inside the neighborhood of a core point, and all of these neighboring core points are absorbed into the k-th cluster. Noise points are filtered out globally during each round of cluster growth and do not take part in the next round of heuristic discovery; only border points are re-examined in a later iteration to check whether they can become core points of a new cluster.
0x3: Hello World DBSCAN
The example below generates three Gaussian blobs, standardizes them, clusters them with DBSCAN (eps=0.3, min_samples=10), prints several external and internal clustering metrics, and finally plots the result, drawing core points with large markers, border points with small markers, and noise in black.

# -*- coding: utf-8 -*-
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler

# #############################################################################
# Generate sample data
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(n_samples=750, centers=centers, cluster_std=0.4,
                            random_state=0)
X = StandardScaler().fit_transform(X)

# #############################################################################
# Compute DBSCAN
db = DBSCAN(eps=0.3, min_samples=10).fit(X)
print(db.labels_)

# Boolean mask marking which samples are core samples.
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
print(core_samples_mask)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_

# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)

print('Estimated number of clusters: %d' % n_clusters_)
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels_true, labels))
print("Completeness: %0.3f" % metrics.completeness_score(labels_true, labels))
print("V-measure: %0.3f" % metrics.v_measure_score(labels_true, labels))
print("Adjusted Rand Index: %0.3f" % metrics.adjusted_rand_score(labels_true, labels))
print("Adjusted Mutual Information: %0.3f" % metrics.adjusted_mutual_info_score(labels_true, labels))
print("Silhouette Coefficient: %0.3f" % metrics.silhouette_score(X, labels))

# #############################################################################
# Plot result
import matplotlib.pyplot as plt

# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = [plt.cm.Spectral(each)
          for each in np.linspace(0, 1, len(unique_labels))]
for k, col in zip(unique_labels, colors):
    if k == -1:
        # Black used for noise.
        col = [0, 0, 0, 1]

    class_member_mask = (labels == k)

    # Core samples of this cluster: large markers.
    xy = X[class_member_mask & core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
             markeredgecolor='k', markersize=14)

    # Border/noise samples: small markers.
    xy = X[class_member_mask & ~core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
             markeredgecolor='k', markersize=6)

plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
Relevant links:
http://shiyanjun.cn/archives/1288.html
https://en.wikipedia.org/wiki/DBSCAN
https://www.cnblogs.com/hdu-2010/p/4621258.html
http://blog.csdn.net/itplus/article/details/10088625
https://www.cnblogs.com/pinard/p/6208966.html
http://blog.csdn.net/xieruopeng/article/details/53675906
http://www.cnblogs.com/aijianiula/p/4339960.html
http://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html