--- title: "kmed: Distance-Based K-Medoids" author: "Weksi Budiaji" date: "`r Sys.Date()`" output: html_vignette abstract: | The [**kmed**](https://cran.r-project.org/package=kmed) vignette consists of four sequantial parts of distance-based (k-medoids) cluster analysis. The first part is defining the distance. It has numerical, binary, categorical, and mixed distances. The next part is applying a clustering algorithm in the pre-defined distance. There are five k-medoids presented, namely the simple and fast k-medoids, k-medoids, ranked k-medoids, increasing number of clusters in k-medoids, and simple k-medoids. After the clustering result is obtained, a validation step is required. The cluster validation applies internal and relative criteria. The last part is visualizing the cluster result in a biplot or marked barplot. references: - id: ahmad title: A K-mean clustering algorithm for mixed numeric and categorical data author: - family: Ahmad given: A. - family: Dey given: L. container-title: Data and Knowledge Engineering volume: 63 URL: 'https://doi.org/10.1016/j.datak.2007.03.016' DOI: 10.1016/j.datak.2007.03.016 publisher: Elsevier page: 503-527 type: article-journal issued: year: 2007 month: 11 - id: budiaji1 title: Simple K-Medoids Partitioning Algorithm for Mixed Variable Data author: - family: Budiaji given: W. - family: Leisch given: F. container-title: Algorithms volume: 12 URL: 'https://www.mdpi.com/1999-4893/12/9/177' DOI: 10.3390/a12090177 publisher: MDPI page: 177 type: article-journal issued: year: 2019 month: 08 - id: budiaji2 title: Medoid-based shadow value validation and visualization author: - family: Budiaji given: W. container-title: International Journal of Advances in Intelligent Informatics volume: 5 URL: 'https://ijain.org/index.php/IJAIN/article/view/326' DOI: 10.26555/ijain.v5i2.326 publisher: Universitas Ahmad Dahlan page: 76-88 type: article-journal issued: year: 2019 month: 07 - id: dolnicar title: Evaluation of structure and reproducibility of cluster solutions using the bootstrap author: - family: Dolnicar given: S. - family: Leisch given: F. container-title: Marketing Letters volume: 21 publisher: Springer page: 83-101 type: article-journal issued: year: 2010 month: 3 - id: dolnicar2 title: Using graphical statistics to better understand market segmentation solutions author: - family: Dolnicar given: S. - family: Leisch given: F. container-title: International Journal of Market Research volume: 56 publisher: Sage page: 207-230 type: article-journal issued: year: 2014 month: 3 - id: gower title: A general coefficient of similarity and some of its properties author: - family: Gower given: J. container-title: Biometrics volume: 27 publisher: International Biometric Society page: 857-871 type: article-journal issued: year: 1971 month: 12 - id: hahsler title: Consensus clustering; Dissimilarity plots; A visual exploration tool for partitional clustering author: - family: Hahsler given: M. - family: Hornik given: K. container-title: Journal of Computational and Graphical Statistics volume: 20 URL: 'https://doi.org/10.1198/jcgs.2010.09139' DOI: 10.1198/jcgs.2010.09139 page: 335-354 type: article-journal issued: year: 2011 month: 8 - id: harikumar title: K-medoid clustering for heterogeneous data sets author: - family: Harikumar given: S. - family: PV given: S. 
  container-title: Procedia Computer Science
  volume: 70
  URL: 'https://doi.org/10.1016/j.procs.2015.10.077'
  DOI: 10.1016/j.procs.2015.10.077
  publisher: Elsevier
  page: 226-237
  type: article-journal
  issued:
    year: 2015
- id: huang
  title: Clustering large data sets with mixed numeric and categorical values
  author:
  - family: Huang
    given: Z.
  container-title: The First Pacific-Asia Conference on Knowledge Discovery and Data Mining
  page: 21-34
  type: chapter
  issued:
    year: 1997
- id: leisch
  title: Neighborhood graphs, stripes and shadow plots for cluster visualization
  author:
  - family: Leisch
    given: F.
  container-title: Statistics and Computing
  volume: 20
  publisher: Springer
  page: 457-469
  type: article-journal
  issued:
    year: 2010
    month: 10
- id: leisch2
  title: Visualizing cluster analysis and finite mixture models
  author:
  - family: Leisch
    given: F.
  container-title: Handbook of Data Visualization
  publisher: Springer Verlag
  page: 561-587
  type: chapter
  issued:
    year: 2008
- id: monti
  title: 'Consensus clustering: A resampling-based method for class discovery and visualization of gene expression microarray data'
  author:
  - family: Monti
    given: S.
  - family: Tamayo
    given: P.
  - family: Mesirov
    given: J.
  - family: Golub
    given: T.
  container-title: Machine Learning
  volume: 52
  publisher: Springer
  page: 91-118
  type: article-journal
  issued:
    year: 2003
    month: 7
- id: park
  title: A simple and fast algorithm for K-medoids clustering
  author:
  - family: Park
    given: H.
  - family: Jun
    given: C.
  container-title: Expert Systems with Applications
  volume: 36
  URL: 'https://doi.org/10.1016/j.eswa.2008.01.039'
  DOI: 10.1016/j.eswa.2008.01.039
  issue: 2
  publisher: Elsevier
  page: 3336-3341
  type: article-journal
  issued:
    year: 2009
    month: 2
- id: podani
  title: Extending Gower's general coefficient of similarity to ordinal characters
  author:
  - family: Podani
    given: J.
  container-title: Taxon
  volume: 48
  publisher: Wiley
  page: 331-340
  type: article-journal
  issued:
    year: 1999
    month: 5
- id: reynolds
  title: 'Clustering rules: A comparison of partitioning and hierarchical clustering algorithms'
  author:
  - family: Reynolds
    given: A. P.
  - family: Richards
    given: G.
  - family: De La Iglesia
    given: B.
  - family: Rayward-Smith
    given: V. J.
  container-title: Journal of Mathematical Modelling and Algorithms
  volume: 5
  publisher: Springer
  page: 475-504
  type: article-journal
  issued:
    year: 2006
    month: 3
- id: rousseeuw
  title: 'Silhouettes: A graphical aid to the interpretation and validation of cluster analysis'
  author:
  - family: Rousseeuw
    given: P. J.
  container-title: Journal of Computational and Applied Mathematics
  volume: 20
  URL: 'https://doi.org/10.1016/0377-0427(87)90125-7'
  DOI: 10.1016/0377-0427(87)90125-7
  publisher: Elsevier
  page: 53-65
  type: article-journal
  issued:
    year: 1987
    month: 11
- id: wishart
  title: K-means clustering with outlier detection, mixed variables and missing values
  author:
  - family: Wishart
    given: D.
  container-title: 'Exploratory Data Analysis in Empirical Research: Proceedings of the 25th Annual Conference of the Gesellschaft fur Klassifikation e.V., University of Munich, March 14-16, 2001'
  volume: 27
  publisher: Springer Berlin Heidelberg
  page: 216-226
  type: chapter
  issued:
    year: 2003
- id: zadegan
  title: 'Ranked k-medoids: A fast and accurate rank-based partitioning algorithm for clustering large datasets'
  author:
  - family: Zadegan
    given: S.M.R
  - family: Mirzaie
    given: M.
  - family: Sadoughi
    given: F.
  container-title: Knowledge-Based Systems
  volume: 39
  URL: 'https://doi.org/10.1016/j.knosys.2012.10.012'
  DOI: 10.1016/j.knosys.2012.10.012
  publisher: Elsevier
  page: 133-143
  type: article-journal
  issued:
    year: 2013
    month: 2
- id: yu
  title: An improved K-medoids algorithm based on step increasing and optimizing medoids
  author:
  - family: Yu
    given: D.
  - family: Liu
    given: G.
  - family: Guo
    given: M.
  - family: Liu
    given: X.
  container-title: Expert Systems with Applications
  volume: 92
  URL: 'https://doi.org/10.1016/j.eswa.2017.09.052'
  DOI: 10.1016/j.eswa.2017.09.052
  publisher: Elsevier
  page: 464-473
  type: article-journal
  issued:
    year: 2018
    month: 2
vignette: >
  %\VignetteIndexEntry{kmed: Distance-Based K-Medoids}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

## 1. Introduction {#intro}

The [**kmed**](https://cran.r-project.org/package=kmed) package is designed to analyse distance-based k-medoids clustering. The features include:

* distance computation:
    + numerical variables:
        - [Manhattan weighted by range](#mrw)
        - [squared Euclidean weighted by range](#ser)
        - [squared Euclidean weighted by squared range](#ser.2)
        - [squared Euclidean weighted by variance](#sev)
        - [unweighted squared Euclidean](#se)
    + binary or categorical variables:
        - [simple matching](#sm)
        - [co-occurrence](#cooc)
    + mixed variables:
        - [Gower](#gower)
        - [Wishart](#wishart)
        - [Podani](#podani)
        - [Huang](#huang)
        - [Harikumar and PV](#harikumar)
        - [Ahmad and Dey](#ahmad)
* k-medoids algorithms:
    + [Simple and fast k-medoids](#sfkm)
    + [K-medoids](#km)
    + [Ranked k-medoids](#rkm)
    + [Increasing number of clusters k-medoids](#inckm)
    + [Simple k-medoids](#skm)
* cluster validations:
    + internal criteria:
        - [Silhouette](#sil)
        - [Centroid-based shadow value](#csv)
        - [Medoid-based shadow value](#msv)
    + [relative criteria (bootstrap)](#boot)
* cluster visualizations:
    + [PCA biplot](#biplot)
    + [marked barplot](#barplotnum)

## 2. Distance Computation

### 2.A. Numerical variables (`distNumeric`)

The `distNumeric` function can be applied to calculate distances between numerical variables. There are five distance options, namely Manhattan weighted by range (`mrw`), squared Euclidean weighted by range (`ser`), squared Euclidean weighted by squared range (`ser.2`), squared Euclidean weighted by variance (`sev`), and unweighted squared Euclidean (`se`). The desired distance is selected via the `method` argument, whose default is `mrw`. The numerical distance computation is demonstrated on the iris data set. For each method, a manual calculation of the distance between the first and second objects is shown to illustrate what the `distNumeric` function does; the five weighting schemes are also summarized right after the data preview below.

```{r}
library(kmed)
iris[1:3,]
```
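For reference, the five weighting schemes can be written as follows. This is a sketch consistent with the worked examples in the subsections below, where $R_k$ denotes the range and $s_k^2$ the variance of variable $k$:

$$d_{\mathrm{mrw}}(i,j) = \sum_{k}\frac{|x_{ik}-x_{jk}|}{R_k}, \qquad d_{\mathrm{ser}}(i,j) = \sum_{k}\frac{(x_{ik}-x_{jk})^2}{R_k},$$
$$d_{\mathrm{ser.2}}(i,j) = \sum_{k}\frac{(x_{ik}-x_{jk})^2}{R_k^2}, \qquad d_{\mathrm{sev}}(i,j) = \sum_{k}\frac{(x_{ik}-x_{jk})^2}{s_k^2},$$
$$d_{\mathrm{se}}(i,j) = \sum_{k}(x_{ik}-x_{jk})^2.$$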
#### 2.A.1. Manhattan weighted by range (`method = "mrw"`) {#mrw}

Applying the `distNumeric` function with `method = "mrw"` yields the distances among all objects in the iris data set.

```{r}
num <- as.matrix(iris[,1:4])
rownames(num) <- rownames(iris)
#calculate the Manhattan weighted by range distance of all iris objects
mrwdist <- distNumeric(num, num)
#show the distance among objects 1 to 3
mrwdist[1:3,1:3]
```

The Manhattan weighted by range distance between objects 1 and 2 is `0.2638889`. To calculate this distance, the range of each variable is computed first.

```{r}
#extract the range of each variable
apply(num, 2, function(x) max(x)-min(x))
```

Then, the distance between objects 1 and 2 is

```{r}
#the distance between objects 1 and 2
abs(5.1-4.9)/3.6 + abs(3.5 - 3.0)/2.4 + abs(1.4-1.4)/5.9 + abs(0.2-0.2)/2.4
```

which is based on the data

```{r, echo=FALSE}
#data object
num[1:2,]
```

[(Back to Introduction)](#intro)

#### 2.A.2. Squared Euclidean weighted by range (`method = "ser"`) {#ser}

```{r}
#calculate the squared Euclidean weighted by range distance of all iris objects
serdist <- distNumeric(num, num, method = "ser")
#show the distance among objects 1 to 3
serdist[1:3,1:3]
```

The squared Euclidean weighted by range distance between objects 1 and 2 is `0.11527778`. It is obtained by

```{r}
#the distance between objects 1 and 2
(5.1-4.9)^2/3.6 + (3.5 - 3.0)^2/2.4 + (1.4-1.4)^2/5.9 + (0.2-0.2)^2/2.4
```

[(Back to Introduction)](#intro)

#### 2.A.3. Squared Euclidean weighted by squared range (`method = "ser.2"`) {#ser.2}

```{r}
#calculate the squared Euclidean weighted by squared range distance of
#all iris objects
ser.2dist <- distNumeric(num, num, method = "ser.2")
#show the distance among objects 1 to 3
ser.2dist[1:3,1:3]
```

The squared Euclidean weighted by squared range distance between objects 1 and 2 is `0.04648920`, which is computed by

```{r}
(5.1-4.9)^2/3.6^2 + (3.5 - 3.0)^2/2.4^2 + (1.4-1.4)^2/5.9^2 + (0.2-0.2)^2/2.4^2
```

where the data are

```{r, echo=FALSE}
#data object
num[1:2,]
```

[(Back to Introduction)](#intro)

#### 2.A.4. Squared Euclidean weighted by variance (`method = "sev"`) {#sev}

```{r}
#calculate the squared Euclidean weighted by variance distance of
#all iris objects
sevdist <- distNumeric(num, num, method = "sev")
#show the distance among objects 1 to 3
sevdist[1:3,1:3]
```

The squared Euclidean weighted by variance distance between objects 1 and 2 is `1.3742671`. To compute this distance, the variance of each variable is calculated first.

```{r}
#calculate the variance of each variable
apply(num[,1:4], 2, function(x) var(x))
```

Then, the distance between objects 1 and 2 is

```{r}
(5.1-4.9)^2/0.6856935 + (3.5 - 3.0)^2/0.1899794 + (1.4-1.4)^2/3.1162779 + (0.2-0.2)^2/0.5810063
```

[(Back to Introduction)](#intro)

#### 2.A.5. Unweighted squared Euclidean (`method = "se"`) {#se}

```{r}
#calculate the squared Euclidean distance of all iris objects
sedist <- distNumeric(num, num, method = "se")
#show the distance among objects 1 to 3
sedist[1:3,1:3]
```

The squared Euclidean distance between objects 1 and 2 is `0.29`. It is computed by

```{r}
(5.1-4.9)^2 + (3.5 - 3.0)^2 + (1.4-1.4)^2 + (0.2-0.2)^2
```

[(Back to Introduction)](#intro)

### 2.B. Binary or categorical variables

Two functions are available to calculate distances for binary and categorical variables. The first is `matching`, which computes the simple matching distance, and the second is `cooccur`, which calculates the co-occurrence distance. To introduce what these functions do, the `bin` data set is generated.

```{r}
set.seed(1)
bin <- matrix(sample(1:2, 4*2, replace = TRUE), 4, 2)
rownames(bin) <- 1:nrow(bin)
colnames(bin) <- c("x", "y")
```

#### 2.B.1. Simple matching (`matching`) {#sm}

The `matching` function calculates the simple matching distance between two data sets. If the two data sets are identical, the function calculates the distance among objects within the data set. The simple matching distance is equal to the proportion of mismatched categories, as the short cross-check below illustrates.
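As a quick cross-check (a minimal base-R sketch, not part of **kmed**), the proportion of mismatched columns can be computed directly for every pair of rows of `bin`; it should reproduce the `matching(bin, bin)` output shown next.

```{r}
#base-R cross-check: proportion of mismatched variables for each pair of rows
smd <- outer(1:nrow(bin), 1:nrow(bin),
             Vectorize(function(i, j) mean(bin[i, ] != bin[j, ])))
dimnames(smd) <- list(rownames(bin), rownames(bin))
smd
```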
```{r}
bin
#calculate the simple matching distance
matching(bin, bin)
```

As an example, the simple matching distance between objects 1 and 2 is calculated by

```{r}
((1 != 1) + (1 != 2))/ 2
```

The distance between objects 1 and 2, which is `0.5`, is produced by *one mismatched* and *one matched* category from the two variables (`x` and `y`) in the `bin` data set. When `x1` is equal to `x2`, for instance, the score is 0. Meanwhile, if `x1` is not equal to `x2`, the score is 1. These scores also apply to the `y` variable. Hence, the distance between objects 1 and 2 is `(0+1)/2`, which is equal to `1/2`.

[(Back to Introduction)](#intro)

#### 2.B.2. Co-occurrence distance (`cooccur`) {#cooc}

The co-occurrence distance [@ahmad; @harikumar] can be calculated via the `cooccur` function. To calculate the distance between objects, the distribution of the variables is taken into consideration. Compared to the simple matching distance, the co-occurrence distance redefines the scores of **matched** and **mismatched** categories such that they are *not necessarily* `0` and `1`, respectively. Because it relies on the distribution of the other variables, the co-occurrence distance cannot be computed for a data set with a single variable. The co-occurrence distance of the `bin` data set is

```{r}
#calculate the co-occurrence distance
cooccur(bin)
```

To show how the co-occurrence distance is calculated, the distance between objects 1 and 2 is presented.

```{r}
bin
```

**Step 1** Creating cross tabulations

```{r}
#cross tabulation to define the score in the y variable
(tab.y <- table(bin[,'x'], bin[,'y']))
#cross tabulation to define the score in the x variable
(tab.x <- table(bin[,'y'], bin[,'x']))
```

**Step 2** Calculating the column proportions of each cross tabulation

```{r}
#proportion in the y variable
(prop.y <- apply(tab.y, 2, function(x) x/sum(x)))
#proportion in the x variable
(prop.x <- apply(tab.x, 2, function(x) x/sum(x)))
```

**Step 3** Finding the maximum value in each column of the proportions

```{r}
#maximum proportion in the y variable
(max.y <- apply(prop.y, 2, function(x) max(x)))
#maximum proportion in the x variable
(max.x <- apply(prop.x, 2, function(x) max(x)))
```

**Step 4** Defining the scores of each variable

The score is obtained by summing the maximum proportions, subtracting 1, and dividing by a constant. The value of the constant depends on the number of variables involved. For the `bin` data set, the constant is `1` because both the `x` and `y` variables depend on only *one* other variable, i.e. `x` depends on the distribution of `y` and `y` relies on the distribution of `x`.

```{r}
#mismatch score in the y variable
(sum(max.y) - 1)/1
#mismatch score in the x variable
(sum(max.x) - 1)/1
```

This implies that the scores for mismatched categories are `0.5` and `0.67` in the `x` and `y` variables, respectively. Note that the score for **matched** categories is **always `0`**. Thus, the distance between objects 1 and 2 is `0 + 0.6666667 = 0.6666667` and between objects 1 and 3 is `0.5 + 0.6666667 = 1.1666667`.

[(Back to Introduction)](#intro)

### 2.C. Mixed variables (`distmix`)

There are six distance methods available for a mixed variable data set. The `distmix` function calculates mixed variable distances and requires the *column ids* of each class of variables. The `mixdata` data set is generated to describe each method in the `distmix` function.
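The `idnum`, `idbin`, and `idcat` arguments used below are specified manually. As a convenience, a small hypothetical helper (not part of **kmed**) could derive them from a data frame's column classes, assuming numerical variables are stored as numeric columns, binary variables as factors with two levels, and categorical variables as factors with more than two levels. Note that in the `mixdata` example below all columns are numeric codes, so the ids are given by hand.

```{r}
#hypothetical helper: derive column ids from a data frame's column classes,
#assuming factors encode binary (2 levels) and categorical (>2 levels) variables
idcols <- function(df) {
  isnum <- sapply(df, is.numeric)
  nlev  <- sapply(df, function(x) if (is.factor(x)) nlevels(x) else NA)
  list(idnum = which(isnum),
       idbin = which(!isnum & nlev == 2),
       idcat = which(!isnum & nlev > 2))
}
```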
```{r}
cat <- matrix(c(1, 3, 2, 1, 3, 1, 2, 2), 4, 2)
mixdata <- cbind(iris[c(1:2, 51:52),3:4], bin, cat)
rownames(mixdata) <- 1:nrow(mixdata)
colnames(mixdata) <- c(paste(c("num"), 1:2, sep = ""), paste(c("bin"), 1:2, sep = ""),
                       paste(c("cat"), 1:2, sep = ""))
```

#### 2.C.1 Gower (`method = "gower"`) {#gower}

The `method = "gower"` in the `distmix` function calculates the @gower distance. The original Gower distance allows missing values, but they are not allowed in the `distmix` function.

```{r}
mixdata
```

The Gower distance of the `mixdata` data set is

```{r}
#calculate the Gower distance
distmix(mixdata, method = "gower", idnum = 1:2, idbin = 3:4, idcat = 5:6)
```

As an example, the distance between objects 3 and 4 is presented. The range of each numerical variable is necessary.

```{r}
#extract the range of each numerical variable
apply(mixdata[,1:2], 2, function(x) max(x)-min(x))
```

The Gower distance calculates the Gower similarity first. In the Gower similarity, the **mismatched** categories in the binary/ categorical variables are scored **0** and the **matched** categories are scored **1**. Meanwhile, for the numerical variables, the ratio between the absolute difference and the variable's range is subtracted from 1. Then, the Gower similarity is weighted by the number of variables. Thus, the Gower similarity between objects 3 and 4 is

```{r}
#the Gower similarity
(gowsim <- ((1-abs(4.7-4.5)/3.3) + (1-abs(1.4-1.5)/1.3) + 1 + 1 + 0 + 1)/ 6 )
```

The Gower distance is obtained by subtracting the Gower similarity from 1. The distance between objects 3 and 4 is then

```{r}
#the Gower distance
1 - gowsim
```

[(Back to Introduction)](#intro)

#### 2.C.2 Wishart (`method = "wishart"`) {#wishart}

The @wishart distance can be calculated via `method = "wishart"`. Although it allows missing values, they are again not allowed in the `distmix` function. The Wishart distance for the `mixdata` is

```{r}
#calculate the Wishart distance
distmix(mixdata, method = "wishart", idnum = 1:2, idbin = 3:4, idcat = 5:6)
```

To calculate the Wishart distance, the variance of each numerical variable is required. It weighs the squared difference of a numerical variable.

```{r}
#extract the variance of each numerical variable
apply(mixdata[,1:2], 2, function(x) var(x))
```

Meanwhile, the **mismatched** categories in the binary/ categorical variables are scored **1** and the **matched** categories are scored **0**. Then, the scores of all variables are summed, divided by the number of variables, and the square root is taken. Thus, the distance between objects 3 and 4 is

```{r}
wish <- (((4.7-4.5)^2/3.42) + ((1.4-1.5)^2/0.5225) + 0 + 0 + 1 + 0)/ 6
#the Wishart distance
sqrt(wish)
```

[(Back to Introduction)](#intro)

#### 2.C.3 Podani (`method = "podani"`) {#podani}

The `method = "podani"` in the `distmix` function calculates the @podani distance. Similar to the Gower and Wishart distances, it allows missing values, yet they are not allowed in the `distmix` function. The Podani distance for the `mixdata` is

```{r}
#calculate the Podani distance
distmix(mixdata, method = "podani", idnum = 1:2, idbin = 3:4, idcat = 5:6)
```

The Podani and Wishart distances are similar. They differ in the denominator for the numerical variables: instead of the variance, the Podani distance applies the squared range. Unlike the Gower and Wishart distances, the Podani distance is not weighted by the number of variables.
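In symbols, a sketch of the two distances as they appear in the worked examples (with $p$ the number of variables, $s_k^2$ the variance and $R_k$ the range of numerical variable $k$, and $\delta_{ijk}$ the 0/1 mismatch indicator for a binary or categorical variable $k$) is

$$d_{\mathrm{wishart}}(i,j) = \sqrt{\frac{1}{p}\left(\sum_{k \in \mathrm{num}} \frac{(x_{ik}-x_{jk})^2}{s_k^2} + \sum_{k \notin \mathrm{num}} \delta_{ijk}\right)}, \qquad d_{\mathrm{podani}}(i,j) = \sqrt{\sum_{k \in \mathrm{num}} \frac{(x_{ik}-x_{jk})^2}{R_k^2} + \sum_{k \notin \mathrm{num}} \delta_{ijk}}.$$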
Hence, the distance between objects 3 and 4 is

```{r}
poda <- ((4.7-4.5)^2/3.3^2) + ((1.4-1.5)^2/1.3^2) + 0 + 0 + 1 + 0
#the Podani distance
sqrt(poda)
```

which is based on the data

```{r, echo=FALSE}
#data object
mixdata[3:4,]
```

[(Back to Introduction)](#intro)

#### 2.C.4 Huang (`method = "huang"`) {#huang}

The `method = "huang"` in the `distmix` function calculates the @huang distance. The Huang distance of the `mixdata` data set is

```{r}
#calculate the Huang distance
distmix(mixdata, method = "huang", idnum = 1:2, idbin = 3:4, idcat = 5:6)
```

The average standard deviation of the numerical variables is required to calculate the Huang distance. This measure weighs the binary/ categorical variables.

```{r}
#find the average standard deviation of the numerical variables
mean(apply(mixdata[,1:2], 2, function(x) sd(x)))
```

While the squared difference is calculated for the numerical variables, the **mismatched** categories are scored **1** and the **matched** categories are scored **0** in the binary/ categorical variables. Thus, the distance between objects 3 and 4 is

```{r}
(4.7-4.5)^2 + (1.4-1.5)^2 + 1.286083*(0 + 0) + 1.286083*(1 + 0)
```

[(Back to Introduction)](#intro)

#### 2.C.5 Harikumar and PV (`method = "harikumar"`) {#harikumar}

The @harikumar distance can be calculated via `method = "harikumar"`. The Harikumar and PV distance for the `mixdata` is

```{r}
#calculate the Harikumar-PV distance
distmix(mixdata, method = "harikumar", idnum = 1:2, idbin = 3:4, idcat = 5:6)
```

The Harikumar and PV distance requires the absolute difference for the numerical variables and unweighted simple matching, i.e. Hamming distance, for the binary variables. For the categorical variables, it applies the co-occurrence distance. The co-occurrence distance in the categorical variables is (for a manual calculation see the [co-occurrence](#cooc) subsection)

```{r}
cooccur(mixdata[,5:6])
```

Hence, the distance between objects 3 and 4 is

```{r}
abs(4.7-4.5) + abs(1.4-1.5) + (0 + 0) + (0.5)
```

where the data are

```{r, echo=FALSE}
#data object
mixdata[3:4,]
```

[(Back to Introduction)](#intro)

#### 2.C.6 Ahmad and Dey (`method = "ahmad"`) {#ahmad}

The `method = "ahmad"` in the `distmix` function calculates the @ahmad distance. The Ahmad and Dey distance of the `mixdata` data set is

```{r}
#calculate the Ahmad-Dey distance
distmix(mixdata, method = "ahmad", idnum = 1:2, idbin = 3:4, idcat = 5:6)
```

The Ahmad and Dey distance requires the squared difference for the numerical variables and the co-occurrence distance for both the binary and categorical variables. The co-occurrence distance in the `mixdata` data set is

```{r}
cooccur(mixdata[,3:6])
```

Thus, the distance between objects 2 and 3 is

```{r}
(1.4-4.7)^2 + (0.2-1.4)^2 + (2)^2
```

which is based on the data

```{r, echo=FALSE}
#data object
mixdata[2:3,]
```

[(Back to Introduction)](#intro)

## 3. K-medoids algorithms

There are five k-medoids algorithms available in this package: the simple and fast k-medoids (`fastkmed`), k-medoids (also run via `fastkmed`), ranked k-medoids (`rankkmed`), increasing number of clusters k-medoids (`inckmed`), and simple k-medoids (`skm`). All algorithms return a list of results, namely the cluster membership, the medoid ids, and the distance of all objects to their medoid. In this section, the algorithms are applied to the `iris` data set using the `mrw` distance (see [Manhattan weighted by range](#mrw)), with the number of clusters set to 3. Each result is compared with the iris species; the misclassification counts are computed by hand in the subsections, and a small helper that can serve as a cross-check is sketched below.
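The following is a minimal sketch (not part of **kmed**) of such a helper. It assumes that each cluster is labelled by its majority species; with that assumption it should give the same rates as the hand counts in the subsections below.

```{r}
#hypothetical helper: misclassification rate of a cluster-vs-class table,
#assuming each cluster is labelled by its majority class
missrate <- function(tab) 1 - sum(apply(tab, 1, max)) / sum(tab)
#e.g. missrate(table(sfkm$cluster, iris[, 5])) once a clustering result exists
```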
### 3.A. Simple and fast k-medoids algorithm (`fastkmed`) {#sfkm}

The simple and fast k-medoids (SFKM) algorithm was proposed by @park. The `fastkmed` function runs this algorithm to cluster the objects. The compulsory inputs are a distance matrix or distance object and the number of clusters. Hence, the SFKM algorithm for the `iris` data set is

```{r}
#run the sfkm algorithm on the iris data set with the mrw distance
(sfkm <- fastkmed(mrwdist, ncluster = 3, iterate = 50))
```

Then, a classification table can be obtained.

```{r}
(sfkmtable <- table(sfkm$cluster, iris[,5]))
```

Applying the SFKM algorithm to the `iris` data set with the Manhattan weighted by range distance, the misclassification rate is

```{r}
(3+11)/sum(sfkmtable)
```

[(Back to Introduction)](#intro)

### 3.B. K-medoids algorithm {#km}

@reynolds proposed a k-medoids (KM) algorithm. It is similar to the [SFKM](#sfkm) algorithm, so the `fastkmed` function can also be applied. The difference lies in the initial medoid selection: the KM algorithm selects the initial medoids randomly. Thus, the KM algorithm for the `iris` data set, setting the `init` argument, is

```{r}
#set the initial medoids
set.seed(1)
(kminit <- sample(1:nrow(iris), 3))
#run the km algorithm on the iris data set with the mrw distance
(km <- fastkmed(mrwdist, ncluster = 3, iterate = 50, init = kminit))
```

The classification table of the KM algorithm is

```{r}
(kmtable <- table(km$cluster, iris[,5]))
```

with the misclassification rate

```{r}
(3+9)/sum(kmtable)
```

Compared to the [SFKM](#sfkm) algorithm, which has a `9.33`\% misclassification rate, the KM algorithm performs slightly better (`8`\%).

[(Back to Introduction)](#intro)

### 3.C. Ranked k-medoids algorithm (`rankkmed`) {#rkm}

A ranked k-medoids (RKM) algorithm was proposed by @zadegan. The `rankkmed` function runs the RKM algorithm. The `m` argument is introduced to calculate a hostility score; it indicates how many of the closest objects are selected. The initial medoids in the RKM algorithm are assigned randomly. The RKM algorithm for the `iris` data set, setting `m = 10`, is then

```{r}
#run the rkm algorithm on the iris data set with the mrw distance and m = 10
(rkm <- rankkmed(mrwdist, ncluster = 3, m = 10, iterate = 50))
```

Then, a classification table is attained by

```{r}
(rkmtable <- table(rkm$cluster, iris[,5]))
```

The misclassification proportion is

```{r}
(3+3)/sum(rkmtable)
```

With a `4`\% misclassification rate, the RKM algorithm is the best among the three algorithms so far.

[(Back to Introduction)](#intro)

### 3.D. Increasing number of clusters k-medoids algorithm (`inckmed`) {#inckm}

@yu proposed an increasing number of clusters k-medoids (INCKM) algorithm. This algorithm is implemented in the `inckmed` function. The `alpha` argument indicates a stretch factor to select the initial medoids. The [SFKM](#sfkm), [KM](#km) and [INCKM](#inckm) are similar algorithms that differ only in how the initial medoids are selected. The INCKM algorithm for the `iris` data set with `alpha = 1.1` is

```{r}
#run the inckm algorithm on the iris data set with the mrw distance and alpha = 1.1
(inckm <- inckmed(mrwdist, ncluster = 3, alpha = 1.1, iterate = 50))
```

Then, the classification table can be attained.

```{r}
(inckmtable <- table(inckm$cluster, iris[,5]))
```

The misclassification rate is

```{r}
(9+3)/sum(inckmtable)
```

This algorithm has an `8`\% misclassification rate, so the [RKM](#rkm) algorithm performs best among the four algorithms on the `iris` data set with the `mrw` distance.
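As an optional sensitivity check, the misclassification helper sketched earlier could be used to compare a few `alpha` values; this is only a sketch (not evaluated here) under the assumption that these values are valid stretch factors for `inckmed`.

```{r, eval=FALSE}
#hypothetical sensitivity check over the stretch factor alpha (not run)
alphas <- c(1.1, 1.2, 1.5)
sapply(alphas, function(a) {
  fit <- inckmed(mrwdist, ncluster = 3, alpha = a, iterate = 50)
  missrate(table(fit$cluster, iris[, 5]))
})
```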
[(Back to Introduction)](#intro)

### 3.E. Simple k-medoids algorithm (`skm`) {#skm}

The simple k-medoids (SKM) algorithm was proposed by @budiaji1. The `skm` function runs this algorithm to cluster the objects. The compulsory inputs are a distance matrix or distance object and the number of clusters. Hence, the SKM algorithm for the `iris` data set is

```{r}
#run the skm algorithm on the iris data set with the mrw distance
(simplekm <- skm(mrwdist, ncluster = 3, seeding = 50))
```

Then, a classification table can be obtained.

```{r}
(simpletable <- table(simplekm$cluster, iris[,5]))
```

Applying the SKM algorithm to the `iris` data set with the Manhattan weighted by range distance, the misclassification rate is

```{r}
(4+11)/sum(simpletable)
```

[(Back to Introduction)](#intro)

## 4. Cluster validation

The clustering algorithm result has to be validated. Two types of validation are implemented in the [**kmed**](https://cran.r-project.org/package=kmed) package: internal and relative criteria.

### 4.A. Internal criteria

### 4.A.1. Silhouette (`sil`) {#sil}

@rousseeuw proposed the silhouette index as an internal validation measure. It is based on the average distance of an object to the other objects within its cluster and to the objects in the nearest neighbouring cluster. The `sil` function calculates the silhouette index of a clustering result. The arguments are a distance matrix or distance object, the medoid ids, and the cluster membership. It produces a list containing the silhouette indices and a silhouette plot. The silhouette index and plot of the best clustering result of the `iris` data set via the [RKM](#rkm) algorithm are presented.

```{r}
#calculate the silhouette of the RKM result of the iris data set
siliris <- sil(mrwdist, rkm$medoid, rkm$cluster,
               title = "Silhouette plot of Iris data set via RKM")
```

The silhouette index of each object can be obtained by

```{r}
#silhouette indices of objects 49 to 52
siliris$result[c(49:52),]
```

Then, the plot is presented by

```{r, fig.width=7, fig.asp=0.8}
siliris$plot
```

[(Back to Introduction)](#intro)

### 4.A.2. Centroid-based shadow value (`csv`) {#csv}

Another way to measure internal validation, with a corresponding plot, is the centroid-based shadow value (CSV) [@leisch]. The `csv` function calculates and plots the centroid-based shadow value of each object, which is based on the first and second closest medoids. The centroid of the original version of the CSV is replaced by a medoid in the `csv` function to adapt it to the k-medoids algorithm. The required arguments of the `csv` function are identical to those of the [silhouette (`sil`)](#sil) function. Thus, the shadow values and plot of the best clustering result of the `iris` data set via the [RKM](#rkm) algorithm can be obtained by

```{r}
#calculate the centroid-based shadow value of the RKM result of the iris data set
csviris <- csv(mrwdist, rkm$medoid, rkm$cluster,
               title = "CSV plot of Iris data set via RKM")
```

The centroid-based shadow values of objects 49 to 52, for instance, are presented by

```{r}
#shadow values of objects 49 to 52
csviris$result[c(49:52),]
```

The centroid-based shadow value plot is also produced.

```{r, fig.width=7, fig.asp=0.8}
csviris$plot
```
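For reference, a sketch of the shadow value plotted above, paraphrasing the description in @leisch with medoids in place of centroids (not a formula quoted from the package documentation): if $m_1(i)$ and $m_2(i)$ denote the first and second closest medoids of object $i$, then

$$s(i) = \frac{2\, d(i, m_1(i))}{d(i, m_1(i)) + d(i, m_2(i))},$$

so that $s(i)$ is close to 0 when object $i$ lies near its own medoid and close to 1 when it is almost equidistant from the two closest medoids.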
[(Back to Introduction)](#intro)

### 4.A.3. Medoid-based shadow value (`msv`) {#msv}

Another way to measure internal validation, combining the silhouette and CSV properties, is the medoid-based shadow value (MSV) [@budiaji2]. The `msv` function calculates and plots the medoid-based shadow value of each object, which is based on the first and second closest medoids. The required arguments of the `msv` function are identical to those of the [centroid-based shadow value (`csv`)](#csv) function. Thus, the medoid-based shadow values and plot of the best clustering result of the `iris` data set via the [RKM](#rkm) algorithm can be obtained by

```{r}
#calculate the medoid-based shadow value of the RKM result of the iris data set
msviris <- msv(mrwdist, rkm$medoid, rkm$cluster,
               title = "MSV plot of Iris data set via RKM")
```

The medoid-based shadow values of objects 49 to 52, for instance, are presented by

```{r}
#medoid-based shadow values of objects 49 to 52
msviris$result[c(49:52),]
```

The shadow value plot is also produced.

```{r, fig.width=7, fig.asp=0.8}
msviris$plot
```

[(Back to Introduction)](#intro)

### 4.B. Relative criteria {#boot}

The relative criteria evaluate a clustering result by applying a re-sampling strategy, such as the bootstrap. The result of the cluster bootstrapping is expected to be robust over all replications [@dolnicar]. There are three steps to validate the cluster result via the bootstrap strategy.

#### Step 1 Creating a matrix of bootstrap replicates {#stp1}

To create a matrix of bootstrap replicates, the `clustboot` function can be applied. It has five arguments, of which the `algorithm` argument is the most important. The `algorithm` argument specifies the clustering algorithm that the user wants to evaluate; it has to be a *function*. When the [RKM](#rkm) result of the `iris` data set is validated, for instance, the RKM function that is required as an input of the `algorithm` argument is

```{r}
#The RKM function for an argument input
rkmfunc <- function(x, nclust) {
  res <- rankkmed(x, nclust, m = 10, iterate = 50)
  return(res$cluster)
}
```

The created function has to have two input arguments: `x` (a distance matrix) and `nclust` (a number of clusters). The output, on the other hand, is *a vector* of cluster memberships (`res$cluster`). Thus, the matrix of bootstrap replicates can be produced by

```{r}
#The RKM algorithm evaluation by supplying the rkmfunc function
#in the algorithm argument
rkmbootstrap <- clustboot(mrwdist, nclust=3, nboot=50, algorithm = rkmfunc)
```

with objects 1 to 4 in the first to fifth replications being

```{r}
rkmbootstrap[1:4,1:5]
```

The `rkmbootstrap` is a matrix of bootstrap replicates with a dimension of *150 x 50*, i.e. *n x b*, where *n* is the number of objects and *b* is the number of bootstrap replicates. **Note** that the default evaluated algorithm is the [SFKM](#sfkm) algorithm, so the matrix of bootstrap replicates can still be produced if the `algorithm` argument is omitted. However, this can be misleading because it does not evaluate the user's own algorithm.
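In the same way, any of the algorithms from Section 3 can be wrapped for evaluation. For instance, a sketch of a wrapper for the [SKM](#skm) algorithm, a hypothetical example following the same two-argument, membership-vector convention, is

```{r}
#a hypothetical wrapper for the skm algorithm, following the same convention
skmfunc <- function(x, nclust) {
  res <- skm(x, ncluster = nclust, seeding = 50)
  return(res$cluster)
}
#it could then be evaluated with, e.g.:
#skmboot <- clustboot(mrwdist, nclust = 3, nboot = 50, algorithm = skmfunc)
```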
#### Step 2 Transforming the bootstrap matrix into a consensus matrix {#stp2}

The matrix of bootstrap replicates produced by `clustboot` in [step 1](#stp1) can be transformed into a consensus matrix with a dimension of *n x n* via the `consensusmatrix` function. An element of the consensus matrix in row *i* and column *j* is the agreement between objects *i* and *j* to be in the same cluster when they are sampled at the same time [@monti]. However, it requires an algorithm to order the objects in such a way that objects in the same cluster are close to each other. The `consensusmatrix` function has the `reorder` argument to accomplish this task. It is similar to the `algorithm` argument of the `clustboot` function in [step 1](#stp1): the `reorder` argument has to be a function with two input arguments that returns a vector of memberships. Transforming the `rkmbootstrap` into a consensus matrix, with the Ward linkage algorithm ordering the objects, can for example be obtained by

```{r}
#The ward function to order the objects in the consensus matrix
wardorder <- function(x, nclust) {
  res <- hclust(as.dist(x), method = "ward.D2")
  member <- cutree(res, nclust)
  return(member)
}
consensusrkm <- consensusmatrix(rkmbootstrap, nclust = 3, wardorder)
```

The first to fourth rows and columns can be displayed as

```{r}
consensusrkm[c(1:4),c(1:4)]
```

#### Step 3 Visualizing the consensus matrix in a heatmap

The ordered consensus matrix from [step 2](#stp2) can be visualized in a heatmap by applying the `clustheatmap` function. The agreement indices in the consensus matrix can be transformed via a non-linear transformation [@hahsler]. Thus, the `consensusrkm` can be visualized as

```{r, fig.width=7, fig.asp=0.8}
clustheatmap(consensusrkm, "Iris data evaluated by the RKM, ordered by Ward linkage")
```

[(Back to Introduction)](#intro)

## 5. Cluster visualization

A visualization of the clustering result can enhance the understanding of the data structure. The biplot and marked barplot are presented to visualize the clustering result.

### 5.A. Biplot {#biplot}

The `pcabiplot` function can be applied to plot a clustering result from a numerical data set. The numerical data set has to be converted into a principal component object via the `prcomp` function. The `x` and `y` axes of the plot can be set to any of the principal components. The colour of the objects can be adjusted based on the cluster membership by supplying a vector of memberships to the `colobj` argument. The `iris` data set can be plotted in a PCA biplot with the objects coloured by the [RKM](#rkm) clustering result.

```{r, fig.width=7, fig.asp=0.8}
#convert the data set into a principal component object
pcadat <- prcomp(iris[,1:4], scale. = TRUE)
#plot the pca with the corresponding RKM clustering result
pcabiplot(pcadat, colobj = rkm$cluster, o.size = 2)
```

The second principal component can be replaced by the third principal component.

```{r, fig.height=3, fig.width=7}
pcabiplot(pcadat, y = "PC3", colobj = rkm$cluster, o.size = 1.5)
```

[(Back to Introduction)](#intro)

### 5.B. Marked barplot {#barplotnum}

A marked barplot has been proposed in [@dolnicar2; @leisch2], where the mark indicates a significant difference between the cluster mean and the population mean in each variable. The `barplotnum` function creates a barplot of each cluster with a particular significance level. The layout of the barplot is set via the `nc` argument. The barplot of the `iris` data set partitioned by the [RKM](#rkm) algorithm is

```{r, fig.width=7, fig.asp=0.8}
barplotnum(iris[,1:4], rkm$cluster, alpha = 0.05)
```

When the layout is changed to 2 columns and the alpha is set to 1\%, it becomes

```{r, fig.width=7, fig.asp=0.8}
barplotnum(iris[,1:4], rkm$cluster, nc = 2, alpha = 0.01)
```

[(Back to Introduction)](#intro)

***

# References {#ref}