Abstract: This paper applies cluster analysis in the context of corporate management. Cluster analysis is a technique capable of addressing our research question: it enables the user to identify the natural underlying structure of a complex data set. In doing so, we can distinguish types of firms, particular board organizations, and levels of firm quality.
Directors will appear to be different when one sits on a firm with large market capitalization while the other sits on a small market capitalization firm. In our study, we attempt to cluster directors by the latter method, as it is reasonable to expect that individual director characteristics (e.g. previous experience) have been considered by companies themselves when they invite those directors to join their boards. In this way, our cluster analysis also takes into consideration individual director characteristics. In defining clustering variables, we follow a deductive approach in which the suitability of variables is theory-based (Ketchen et al., 1993). This is often considered a better approach to guide variable choices, as irrelevant variables may affect the validity of a cluster solution (Punj and Stewart, 1983). We employ the seven firm characteristics previously discussed in Chapter Three as our clustering variables.
As cluster analysis groups objects by maximizing the distance between groups along all clustering variables, variables with larger ranges tend to dominate (i.e. be given more weight in) the cluster solution in comparison to variables with smaller ranges (Hair et al., 1992). Considering that the range of firm size (measured by market capitalization) is several times larger than the range of the ownership structure variables, the firm size variables are likely to dominate our cluster solution. Although some studies claim that standardization of variables to equally scaled data is not necessary (Dolnicar, 2002) and may distort results, we choose to standardize the variables by Z-scores to avoid cluster solutions that are dominated by market capitalization.
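The Z-score standardization described above can be sketched as follows; the data values below are hypothetical (a large-range market capitalization column next to a small-range ownership ratio column) and serve only to illustrate how each variable is rescaled to mean 0 and standard deviation 1:

```python
import numpy as np

def zscore(X):
    """Standardize each column (variable) to mean 0 and standard deviation 1."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Hypothetical data: column 1 is market capitalization (large range),
# column 2 is an ownership ratio (range 0-1). Illustrative values only.
data = np.array([
    [5_000.0, 0.10],
    [  120.0, 0.45],
    [  860.0, 0.30],
])

Z = zscore(data)
# After standardization, both columns contribute on a comparable scale,
# so market capitalization no longer dominates the distance computations.
```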
1.1.1.1. Cluster Analysis Algorithms, Cluster Validation and Consensus Clustering
In the following sections, we describe the three algorithms employed to perform cluster analysis. Subsequently, we discuss the cluster validation steps taken and illustrate the use of consensus clustering to find a single consensus solution. We perform the following steps on three sets of data: all directors, outside directors and independent directors, as extant research has found that dissimilar behaviours between inside and outside directors can lead to contrasting results in studies (Mak et al., 2003 and Fich and Shivdasani, 2006).
1.1.1.1.1. Cluster Analysis Algorithms
Hierarchical clustering
Hierarchical clustering can be either agglomerative or divisive. Divisive methods begin with all objects in one cluster and end when each object is in its own cluster. Divisive methods are not readily available in standard software and are thus rarely used. In this thesis, we employ the more commonly used agglomerative method and discuss the basic process below.
The basic process of agglomerative hierarchical clustering for N objects is (Johnson, 1967):
1. Assign each object to its own cluster; the method thus begins with N clusters.
2. Merge the closest (most similar) pair of clusters into a single cluster, leaving (N-1) clusters.
3. Recompute the distances (similarities) between the new cluster and each of the old clusters.
4. Repeat steps 2 and 3 until only a single cluster remains.
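The steps above can be sketched in a minimal implementation. Single linkage is assumed here as the rule for measuring the distance between clusters, and the one-dimensional data points are purely illustrative:

```python
import numpy as np

def agglomerative(X, num_clusters):
    """Agglomerative hierarchical clustering with single linkage.

    Step 1: start with N singleton clusters; steps 2-4: repeatedly merge
    the closest pair of clusters until num_clusters remain (stopping early
    rather than continuing to a single cluster, to return a partition).
    """
    clusters = [[i] for i in range(len(X))]  # step 1: N clusters
    while len(clusters) > num_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: minimum pairwise distance between members
                d = min(np.linalg.norm(X[a] - X[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best                 # step 2: merge the closest pair
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]                # step 3: distances recomputed next pass
    return clusters

# Toy example: three well-separated groups on a line.
X = np.array([[0.0], [0.3], [5.0], [5.2], [10.0]])
result = agglomerative(X, 3)   # groups {0,1}, {2,3}, {4}
```

The O(N^3) double loop above is for clarity only; production implementations cache the pairwise distance matrix and update it incrementally after each merge.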
In step 2, several methods exist for computing the distance between clusters, and these are referred to as linkage functions. For instance, single linkage clustering