ADVANCED DATA MINING UNIT 3



UNIT III
1) Density-based methods:
In density-based clustering,[8] clusters are defined as areas of higher density than the remainder of the data set. Objects in the sparse areas that are required to separate clusters are usually considered to be noise or border points.
The most popular[9] density-based clustering method is DBSCAN.
It is a density-based clustering algorithm because it finds a number of clusters starting from the estimated density distribution of the corresponding points. DBSCAN is one of the most common clustering algorithms and also the most cited in the scientific literature.[2] OPTICS can be seen as a generalization of DBSCAN to multiple ranges, effectively replacing the ε parameter with a maximum search radius.
DBSCAN requires two parameters: ε (eps) and the minimum number of points required to form a dense region (minPts). It starts with an arbitrary starting point that has not been visited. This point's ε-neighborhood is retrieved, and if it contains sufficiently many points, a cluster is started. Otherwise, the point is labeled as noise. Note that this point might later be found in a sufficiently sized ε-neighborhood of a different point and hence be made part of a cluster.
Advantages
1.     DBSCAN does not require one to specify the number of clusters in the data a priori, as opposed to k-means.
2.     DBSCAN can find arbitrarily shaped clusters. It can even find a cluster completely surrounded by (but not connected to) a different cluster. Due to the MinPts parameter, the so-called single-link effect (different clusters being connected by a thin line of points) is reduced.
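To make the procedure above concrete, here is a minimal sketch of DBSCAN in Python, assuming a small NumPy array of 2-D points; the parameters eps and min_pts mirror ε and minPts, and region_query is a hypothetical helper for retrieving a point's ε-neighborhood.

import numpy as np

def region_query(X, i, eps):
    """Return indices of all points within eps of point i (its eps-neighborhood)."""
    dists = np.linalg.norm(X - X[i], axis=1)
    return np.where(dists <= eps)[0]

def dbscan(X, eps, min_pts):
    """Label each point with a cluster id (0, 1, ...) or -1 for noise."""
    n = len(X)
    labels = np.full(n, -2)          # -2 = unvisited, -1 = noise
    cluster_id = -1
    for i in range(n):
        if labels[i] != -2:
            continue                  # already visited
        neighbors = region_query(X, i, eps)
        if len(neighbors) < min_pts:
            labels[i] = -1            # provisionally noise; may later become a border point
            continue
        cluster_id += 1
        labels[i] = cluster_id
        seeds = list(neighbors)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:       # noise point reachable from a core point -> border point
                labels[j] = cluster_id
            if labels[j] != -2:
                continue
            labels[j] = cluster_id
            j_neighbors = region_query(X, j, eps)
            if len(j_neighbors) >= min_pts:   # j is itself a core point: keep expanding
                seeds.extend(j_neighbors)
    return labels

# Example: two dense blobs plus an outlier
X = np.array([[0, 0], [0.1, 0.2], [0.2, 0.1], [5, 5], [5.1, 5.2], [5.2, 5.1], [10, 10]])
print(dbscan(X, eps=0.5, min_pts=2))   # e.g. [0 0 0 1 1 1 -1]

Note how a point first labeled as noise can still be absorbed as a border point of a cluster found later, exactly as described above.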

2) OPTICS:
Ordering points to identify the clustering structure (OPTICS) is an algorithm for finding density-based clusters in spatial data. It was presented by Mihael Ankerst, Markus M. Breunig, Hans-Peter Kriegel and Jörg Sander.[1] Its basic idea is similar to DBSCAN,[2] but it addresses one of DBSCAN's major weaknesses: the problem of detecting meaningful clusters in data of varying density. To do so, the points of the database are (linearly) ordered such that points which are spatially closest become neighbors in the ordering. Additionally, a special distance is stored for each point that represents the density that must be accepted for a cluster so that both points belong to the same cluster.
Like DBSCAN, OPTICS requires two parameters: ε, which describes the maximum distance (radius) to consider, and MinPts, describing the number of points required to form a cluster. A point p is a core point if at least MinPts points are found within its ε-neighborhood Nε(p). Contrary to DBSCAN, OPTICS also considers points that are part of a more densely packed cluster, so each point is assigned a core distance that describes the distance to the MinPts-th closest point: core-dist(p) is undefined if |Nε(p)| < MinPts, and otherwise equals the distance from p to its MinPts-th closest neighbor.
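As an illustration only (not part of the original OPTICS paper), the sketch below uses scikit-learn's OPTICS implementation, assuming scikit-learn and NumPy are installed; min_samples and max_eps play the roles of MinPts and ε, and the resulting ordering and reachability distances are the quantities described above.

import numpy as np
from sklearn.cluster import OPTICS

# Two clusters of different density plus scattered noise
rng = np.random.default_rng(0)
dense = rng.normal(loc=0.0, scale=0.3, size=(50, 2))
sparse = rng.normal(loc=5.0, scale=1.0, size=(50, 2))
noise = rng.uniform(-5, 10, size=(10, 2))
X = np.vstack([dense, sparse, noise])

# min_samples corresponds to MinPts; max_eps bounds the search radius (ε)
optics = OPTICS(min_samples=5, max_eps=2.0).fit(X)

print(optics.labels_[:10])                            # cluster labels (-1 = noise)
print(optics.ordering_[:10])                          # the cluster ordering of the points
print(optics.reachability_[optics.ordering_][:10])    # reachability distances in that order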

3) DENCLUE:
DENCLUE (DENsity-based CLUstEring) models the overall density of the data analytically as the sum of influence (kernel) functions contributed by the individual data points. Clusters are then defined by density attractors, the local maxima of this estimated density function: each point is assigned to the attractor it reaches by hill climbing, and points whose attractor has a density below a noise threshold are treated as outliers.

4) Grid-based methods: STING, CLIQUE:
Basic Grid-based Algorithm
  1. Define a set of grid-cells
  2. Assign objects to the appropriate grid cell and compute the density of each cell.
  3. Eliminate cells whose density is below a certain threshold t.
  4. Form clusters from contiguous (adjacent) groups of dense cells (usually minimizing a given objective function).
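A minimal Python sketch of these four steps, assuming 2-D NumPy data; cell_size and the threshold t are illustrative parameters, and adjacency is taken to include the eight surrounding cells.

import numpy as np
from collections import defaultdict, deque

def grid_cluster(X, cell_size, t):
    """Basic grid-based clustering: bin points, keep dense cells, merge adjacent dense cells."""
    # Steps 1-2: assign each object to a grid cell and count points per cell (the cell density)
    cells = defaultdict(list)
    for idx, point in enumerate(X):
        key = tuple((point // cell_size).astype(int))
        cells[key].append(idx)

    # Step 3: eliminate cells whose density is below the threshold t
    dense = {key for key, members in cells.items() if len(members) >= t}

    # Step 4: form clusters from contiguous (edge- or corner-adjacent) groups of dense cells
    labels = np.full(len(X), -1)        # -1 = point in a non-dense cell
    cluster_id = 0
    unassigned = set(dense)
    while unassigned:
        seed = unassigned.pop()
        queue = deque([seed])
        while queue:
            cx, cy = queue.popleft()
            for idx in cells[(cx, cy)]:
                labels[idx] = cluster_id
            for dx in (-1, 0, 1):       # visit the 8 neighboring cells
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in unassigned:
                        unassigned.remove(nb)
                        queue.append(nb)
        cluster_id += 1
    return labels

X = np.array([[0.1, 0.1], [0.2, 0.3], [0.3, 0.2], [4.1, 4.2], [4.3, 4.1], [9.0, 9.0]])
print(grid_cluster(X, cell_size=1.0, t=2))   # e.g. [0 0 0 1 1 -1]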
Advantages of Grid-based Clustering Algorithms
·  Fast:
   ·  No distance computations
   ·  Clustering is performed on summaries and not on individual objects; complexity is usually O(#populated grid cells) rather than O(#objects)
·  Easy to determine which clusters are neighboring
·  Cluster shapes are limited to unions of grid cells (a limitation rather than an advantage)
Grid-Based Clustering Methods
·  Using a multi-resolution grid data structure
·  Clustering complexity depends on the number of populated grid cells and not on the number of objects in the dataset
·  Several interesting methods (in addition to the basic grid-based algorithm):
   ·  STING (a STatistical INformation Grid approach) by Wang, Yang and Muntz (1997)
   ·  CLIQUE: Agrawal, et al. (SIGMOD'98)

STING: A Statistical Information Grid Approach
·  Wang, Yang and Muntz (VLDB'97)
·  The spatial area is divided into rectangular cells
·  There are several levels of cells corresponding to different levels of resolution
·  Each cell at a high level is partitioned into a number of smaller cells at the next lower level
·  Statistical information about each cell is calculated and stored beforehand and is used to answer queries
·  Parameters of higher-level cells can be easily calculated from the parameters of lower-level cells:
   ·  count, mean, standard deviation (s), min, max
   ·  type of distribution: normal, uniform, etc.
·  A top-down approach is used to answer spatial data queries
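The following sketch illustrates only the statistics-propagation idea from the list above, under some simplifying assumptions: a square region, a single numeric attribute per point, a 4x4 bottom level, and a fixed 4-to-1 parent/child layout. It is not the full STING query-answering procedure.

import numpy as np

def leaf_stats(pos, val, grid_n, lo, hi):
    """Count / mean / min / max of the attribute `val` for each cell of a grid_n x grid_n grid."""
    cell = (hi - lo) / grid_n
    stats = {}
    for gx in range(grid_n):
        for gy in range(grid_n):
            mask = ((pos[:, 0] >= lo + gx * cell) & (pos[:, 0] < lo + (gx + 1) * cell) &
                    (pos[:, 1] >= lo + gy * cell) & (pos[:, 1] < lo + (gy + 1) * cell))
            v = val[mask]
            stats[(gx, gy)] = {"count": len(v),
                               "mean": v.mean() if len(v) else 0.0,
                               "min": v.min() if len(v) else None,
                               "max": v.max() if len(v) else None}
    return stats

def parent_stats(child, gx, gy):
    """Derive a higher-level cell's statistics from its four lower-level children."""
    kids = [child[(2 * gx + dx, 2 * gy + dy)] for dx in (0, 1) for dy in (0, 1)]
    count = sum(k["count"] for k in kids)
    mean = (sum(k["mean"] * k["count"] for k in kids) / count) if count else 0.0
    mins = [k["min"] for k in kids if k["min"] is not None]
    maxs = [k["max"] for k in kids if k["max"] is not None]
    return {"count": count, "mean": mean,
            "min": min(mins) if mins else None, "max": max(maxs) if maxs else None}

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 8.0, size=(200, 2))   # spatial coordinates
val = rng.normal(10.0, 2.0, size=200)        # attribute measured at each location
leaves = leaf_stats(pos, val, grid_n=4, lo=0.0, hi=8.0)      # bottom level: 4 x 4 cells
parents = {(gx, gy): parent_stats(leaves, gx, gy)            # next level up: 2 x 2 cells
           for gx in range(2) for gy in range(2)}
print(parents[(0, 0)])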
CLIQUE:
·  Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD'98)
·  Automatically identifies subspaces of a high-dimensional data space that allow better clustering than the original space
·  CLIQUE can be considered both density-based and grid-based
·  It partitions each dimension into the same number of equal-length intervals
·  It partitions an m-dimensional data space into non-overlapping rectangular units
·  A unit is dense if the fraction of total data points contained in the unit exceeds the input model parameter
·  A cluster is a maximal set of connected dense units within a subspace
·  Partition the data space and find the number of points that lie inside each cell of the partition
·  Identify the subspaces that contain clusters using the Apriori principle
·  Identify clusters:
   ·  Determine dense units in all subspaces of interest
   ·  Determine connected dense units in all subspaces of interest
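A hedged sketch of the first CLIQUE steps in Python, assuming NumPy data: each dimension is cut into xi equal-length intervals, dense 1-D units are found, and candidate 2-D units are generated only from dense 1-D units (the Apriori principle). Merging connected dense units into maximal clusters is omitted; the names xi and tau are just the conventional ones for the number of intervals and the density threshold.

import numpy as np
from itertools import combinations

def interval_index(X, xi):
    """Map every value to an interval index 0..xi-1 within its own dimension's range."""
    lo = X.min(axis=0)
    width = (X.max(axis=0) - lo) / xi
    width[width == 0] = 1.0
    return np.minimum(((X - lo) / width).astype(int), xi - 1)

def clique_dense_units(X, xi, tau):
    """Dense 1-D units, then candidate 2-D units built only from dense 1-D units (Apriori)."""
    n, d = X.shape
    idx = interval_index(X, xi)

    # Step 1: dense 1-D units (dim, interval) whose fraction of points exceeds tau
    dense1 = [(dim, iv) for dim in range(d) for iv in range(xi)
              if np.mean(idx[:, dim] == iv) > tau]

    # Step 2: candidate 2-D units are intersections of dense 1-D units from different dimensions;
    # by the Apriori principle, a dense 2-D unit cannot have a non-dense 1-D projection.
    dense2 = []
    for (d1, i1), (d2, i2) in combinations(dense1, 2):
        if d1 == d2:
            continue
        frac = np.mean((idx[:, d1] == i1) & (idx[:, d2] == i2))
        if frac > tau:
            dense2.append(((d1, i1), (d2, i2)))
    return dense1, dense2

rng = np.random.default_rng(2)
# A cluster that is dense only in the subspace (dim 0, dim 1); dim 2 is uniform noise
cluster = np.column_stack([rng.normal(2, 0.2, 100), rng.normal(7, 0.2, 100), rng.uniform(0, 10, 100)])
background = rng.uniform(0, 10, size=(100, 3))
X = np.vstack([cluster, background])

dense1, dense2 = clique_dense_units(X, xi=10, tau=0.2)
print("dense 1-D units:", dense1)
print("dense 2-D units:", dense2)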

5) Expectation-maximization algorithm:
In statistics, an expectation–maximization (EM) algorithm is an iterative method for finding maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step.
The EM algorithm is used to find the maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either there are missing values among the data, or the model can be formulated more simply by assuming the existence of additional unobserved data points. For example, a mixture model can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component that each data point belongs to.
Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values — viz. the parameters and the latent variables — and simultaneously solving the resulting equations. In statistical models with latent variables, this usually is not possible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice-versa, but substituting one set of equations into the other produces an unsolvable equation.

Description

Given a statistical model consisting of a set X of observed data, a set of unobserved latent data or missing values Z, and a vector of unknown parameters θ, along with a likelihood function L(θ; X, Z) = p(X, Z | θ), the maximum likelihood estimate (MLE) of the unknown parameters is determined by the marginal likelihood of the observed data: L(θ; X) = p(X | θ) = Σ_Z p(X, Z | θ).
However, this quantity is often intractable (e.g. if Z is a sequence of events, so that the number of possible values grows exponentially with the sequence length, making the exact calculation of the sum extremely difficult).
The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying the following two steps:
Expectation step (E step): Calculate the expected value of the log-likelihood function with respect to the conditional distribution of Z given X under the current estimate of the parameters θ(t): Q(θ | θ(t)) = E[ log L(θ; X, Z) | X, θ(t) ]
Maximization step (M step): Find the parameters that maximize this quantity: θ(t+1) = argmax over θ of Q(θ | θ(t))
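A minimal sketch of these two steps for a two-component one-dimensional Gaussian mixture, assuming NumPy; the latent variable Z is the unobserved component of each point, and the responsibilities computed in the E step are exactly its conditional distribution given X and the current parameters.

import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture; parameters are (pi, mu1, mu2, s1, s2)."""
    # crude initial estimates
    pi, mu1, mu2 = 0.5, x.min(), x.max()
    s1 = s2 = x.std()
    for _ in range(n_iter):
        # E step: responsibility r_i = P(Z_i = component 1 | x_i, current parameters)
        p1 = pi * normal_pdf(x, mu1, s1)
        p2 = (1 - pi) * normal_pdf(x, mu2, s2)
        r = p1 / (p1 + p2)
        # M step: parameters that maximize the expected complete-data log-likelihood
        pi = r.mean()
        mu1 = np.sum(r * x) / np.sum(r)
        mu2 = np.sum((1 - r) * x) / np.sum(1 - r)
        s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / np.sum(r))
        s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / np.sum(1 - r))
    return pi, mu1, mu2, s1, s2

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(3.0, 1.0, 700)])
print(em_gmm_1d(x))   # mixing weight and per-component mean / std, roughly recovering the truth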
6) Clustering high-dimensional data:
Clustering high-dimensional data is the cluster analysis of data with anywhere from a few dozen to many thousands of dimensions. Such high-dimensional data spaces are often encountered in areas such as medicine, where DNA microarray technology can produce a large number of measurements at once, and the clustering of text documents, where, if a word-frequency vector is used, the number of dimensions equals the size of the vocabulary.

Problems

According to Kriegel, Kröger & Zimek (2009), four problems need to be overcome for clustering in high-dimensional data:
·         Multiple dimensions are hard to think in, impossible to visualize, and, due to the exponential growth of the number of possible values with each dimension, complete enumeration of all subspaces becomes intractable with increasing dimensionality. This problem is known as the curse of dimensionality.
·         The concept of distance becomes less precise as the number of dimensions grows, since the distance between any two points in a given dataset converges. The discrimination of the nearest and farthest point in particular becomes meaningless: if Dmax and Dmin denote the distances from a query point to its farthest and nearest neighbor, then (Dmax − Dmin) / Dmin tends toward 0 as the dimensionality grows.
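A small numerical check of this effect, assuming uniformly random data: as the dimensionality grows, the ratio between the farthest and nearest distances from a query point typically shrinks toward 1.

import numpy as np

rng = np.random.default_rng(0)
n_points = 1000

for d in (2, 10, 100, 1000):
    X = rng.uniform(size=(n_points, d))        # random data set in the unit hypercube
    q = rng.uniform(size=d)                    # query point
    dists = np.linalg.norm(X - q, axis=1)
    print(f"d={d:5d}  Dmax/Dmin = {dists.max() / dists.min():.2f}")
# The printed ratio typically falls toward 1 as d grows, so "nearest" and "farthest"
# become nearly indistinguishable: the curse of dimensionality.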

Subspace clustering

[Figure: example 2-D space with subspace clusters]
Subspace clustering is the task of detecting all clusters in all subspaces. This means that a point might be a member of multiple clusters, each existing in a different subspace. Subspaces can either be axis-parallel or affine. The term is often used synonymously with general clustering of high-dimensional data.

Projected clustering

Projected clustering seeks to assign each point to a unique cluster, but clusters may exist in different subspaces. The general approach is to use a special distance function together with a regular clustering algorithm.
7) Clustering graph and network data:
Clustering data is a fundamental task in machine learning. In graph and network data, clustering groups the vertices so that vertices within the same cluster are densely connected to one another, while vertices in different clusters are only sparsely connected.


