Applying the K-Means algorithm to a simple dataset demonstrates how unsupervised learning and clustering work in practice. This technique groups data points without requiring prior knowledge of their "true" labels, relying solely on the positions of the points themselves. We will walk through a full application to observe its practical effect.

We'll use a common approach in introductory examples: generating synthetic data. This is helpful because we can create data where we know there are distinct groups, making it easier to visually check whether K-Means does a reasonable job of finding them. We will use Python along with popular libraries: Scikit-learn for the K-Means algorithm and Plotly for visualization.

## Setting Up Our Tools

First, ensure you have the necessary libraries. If you're working in an environment like Google Colab or Anaconda, these might already be installed. If not, you'd typically install them using pip:

```bash
pip install scikit-learn numpy plotly
```

For this example, we need `numpy` for numerical operations (especially creating our data), `sklearn.cluster` for the `KMeans` algorithm, and `plotly.graph_objects` for creating interactive plots suitable for the web.

## Generating and Visualizing Simple Data

Let's create some 2-dimensional data that clearly falls into three groups, or "blobs". Scikit-learn provides a handy function, `make_blobs`, for exactly this purpose.

```python
import numpy as np
import plotly.graph_objects as go
from sklearn.datasets import make_blobs

# Generate synthetic data with 3 distinct clusters
X, _ = make_blobs(n_samples=150,    # Total number of points
                  centers=3,        # Number of clusters to generate
                  cluster_std=0.8,  # Standard deviation of the clusters (spread)
                  random_state=42)  # For reproducibility

# We ignore the second output (y), which holds the true labels.
# X is now a NumPy array with 150 rows and 2 columns (our features).

# Visualize the raw data before clustering
fig_raw = go.Figure(data=[go.Scatter(
    x=X[:, 0],
    y=X[:, 1],
    mode='markers',
    marker=dict(color='#495057', size=7, opacity=0.8)  # Gray for raw data
)])

fig_raw.update_layout(
    title='Synthetic Data Points (Before Clustering)',
    xaxis_title='Feature 1',
    yaxis_title='Feature 2',
    width=600,
    height=450,
    plot_bgcolor='#f8f9fa'  # Light background
)

# Display the plot (in a notebook/web environment)
# fig_raw.show()
```
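The code comments above note that `X` is a 150-by-2 NumPy array. If you want to confirm this for yourself, here is a quick optional check (not part of the original walkthrough, using only the `X` defined above):

```python
# Optional sanity check on the generated data
print(X.shape)  # expected: (150, 2), i.e. 150 points with 2 features each
print(X[:3])    # preview the first three (Feature 1, Feature 2) rows
```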
Before applying K-Means, it's always a good idea to look at your data. Here's the plot generated by the code above:

*The synthetic data points plotted in 2D space (Feature 1 vs. Feature 2). We can visually identify three potential groups.*

As you can see, the points form three reasonably well-separated groups. Our eyes can perform this clustering task quite easily for such simple 2D data. Let's see if K-Means can replicate it.

## Applying K-Means

Now we'll use Scikit-learn's `KMeans` implementation. We need to tell the algorithm how many clusters ($K$) to look for. Since we generated the data with 3 centers, let's set $K=3$.

```python
from sklearn.cluster import KMeans

# Initialize the K-Means algorithm.
# n_clusters is the most important parameter: the number of clusters (K).
# n_init='auto' uses an intelligent default for running the algorithm multiple
# times with different centroid seeds to improve results.
# random_state ensures reproducibility of the initialization.
kmeans = KMeans(n_clusters=3, n_init='auto', random_state=42)

# Fit the algorithm to the data X.
# This is where K-Means iterates: assigning points to clusters and updating centroids.
kmeans.fit(X)

# After fitting, the model contains the results:
# 1. The cluster assignment for each data point:
cluster_labels = kmeans.labels_
# 2. The coordinates of the final cluster centers (centroids):
centroids = kmeans.cluster_centers_

# print("Cluster labels assigned to each point:", cluster_labels)
# print("Coordinates of final centroids:\n", centroids)
```

The `fit()` method runs the K-Means algorithm on our data `X`. The algorithm iteratively assigns each point to its nearest centroid and then recalculates each centroid as the mean of the points assigned to it, repeating until the centroids stabilize or a maximum number of iterations is reached (see the sketch after the list below).

The results are stored in the `kmeans` object:

- `kmeans.labels_`: an array whose $i$-th element is the cluster index (0, 1, or 2 in this case) assigned to the $i$-th data point in `X`.
- `kmeans.cluster_centers_`: a 2D array containing the final coordinates of each cluster's centroid.
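To make that assign-and-update loop concrete, here is a minimal NumPy sketch of the core iterations (Lloyd's algorithm). This is a simplified illustration rather than what Scikit-learn does internally: the `kmeans_sketch` name is invented for this example, initialization is plain random sampling instead of the smarter k-means++ scheme, and it assumes no cluster ever ends up empty.

```python
import numpy as np

def kmeans_sketch(X, k, n_iters=100, seed=0):
    """Simplified K-Means loop; assumes no cluster ever becomes empty."""
    rng = np.random.default_rng(seed)
    # Initialize centroids as k randomly chosen data points
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: label each point with its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Stop once the centroids no longer move
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

On well-separated data like our blobs, calling `kmeans_sketch(X, k=3)` should converge within a handful of iterations to centroids very close to those Scikit-learn finds.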
## Visualizing the K-Means Results

Now, let's visualize the same data points, but this time color them according to the cluster labels assigned by K-Means. We'll also plot the final centroids found by the algorithm.

```python
# Define colors for the clusters - using the suggested palette
cluster_colors = ['#4263eb', '#12b886', '#fd7e14']  # Indigo, Teal, Orange
centroid_color = '#f03e3e'  # Red for centroids

# Create the plot
fig_clustered = go.Figure()

# Add data points, colored by cluster label
for i in range(3):  # Loop through clusters 0, 1, 2
    points_in_cluster = X[cluster_labels == i]
    fig_clustered.add_trace(go.Scatter(
        x=points_in_cluster[:, 0],
        y=points_in_cluster[:, 1],
        mode='markers',
        marker=dict(color=cluster_colors[i], size=7, opacity=0.8),
        name=f'Cluster {i}'
    ))

# Add the centroids
fig_clustered.add_trace(go.Scatter(
    x=centroids[:, 0],
    y=centroids[:, 1],
    mode='markers',
    marker=dict(color=centroid_color, size=14, symbol='x', line=dict(width=3)),
    name='Centroids'
))

fig_clustered.update_layout(
    title='K-Means Clustering Results (K=3)',
    xaxis_title='Feature 1',
    yaxis_title='Feature 2',
    width=600,
    height=450,
    plot_bgcolor='#f8f9fa',
    legend_title_text='Legend'
)

# Display the plot
# fig_clustered.show()
```

Here's the resulting plot:

*The same data points, now colored according to the cluster assigned by K-Means with $K=3$. The red 'x' markers indicate the final positions of the cluster centroids.*
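Beyond eyeballing the plot, you can check the results numerically. The following optional snippet (reusing the `X`, `cluster_labels`, and `centroids` variables defined above) counts the points in each cluster and verifies that each centroid sits at the mean of its assigned points:

```python
import numpy as np

# How many points did K-Means assign to each cluster?
print(np.bincount(cluster_labels))

# At convergence, each centroid should (approximately) equal the mean
# position of the points assigned to it.
for i in range(3):
    print(i, np.allclose(centroids[i], X[cluster_labels == i].mean(axis=0)))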
## Interpretation

Compare the K-Means result plot with the initial plot of the raw data. You should see that K-Means has successfully identified the three distinct groups present in our synthetic data. Each color represents a cluster found by the algorithm, and the red 'x' marks show the center (mean position) of all points belonging to that cluster.

In this simple case, where the clusters are well separated and roughly spherical, K-Means performs very well.

## What if We Chose a Different K?

Remember the discussion about choosing $K$? Let's briefly consider what might happen if we instructed K-Means to find, say, $K=2$ clusters in this data. The algorithm would still run, but it would be forced to partition the three visible groups into only two clusters. Typically, it would merge two of the original groups or split one group across the two resulting clusters, depending on the initial centroid placement. Similarly, choosing $K=4$ would force the algorithm to split one or more of the natural groups into smaller, potentially less meaningful clusters. This exercise highlights that while K-Means is effective at partitioning data, the choice of $K$ significantly influences the outcome and its interpretation (a short sketch of this experiment appears at the end of this section).

## Summary

In this practice section, you applied the K-Means algorithm to a simple, visually intuitive dataset. You saw how to:

- Generate synthetic data suitable for clustering practice using `make_blobs`.
- Visualize the raw data to understand its structure.
- Initialize and fit the `KMeans` model from Scikit-learn, specifying the desired number of clusters ($K$).
- Extract the cluster assignments (`labels_`) and centroid locations (`cluster_centers_`) from the fitted model.
- Visualize the results by coloring data points according to their assigned cluster and plotting the final centroids.

This hands-on example demonstrates the core process of using K-Means to find groups in unlabeled data. While real-world data is often more complex and higher-dimensional, the fundamental steps remain the same. You now have a practical foundation for understanding how K-Means works and how to implement it using common tools.
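As promised in the "What if We Chose a Different K?" discussion, here is a minimal sketch of that experiment. It refits the model with several values of $K$ (reusing `X` from earlier) and reports the cluster sizes along with the inertia, Scikit-learn's measure of the within-cluster sum of squared distances:

```python
import numpy as np
from sklearn.cluster import KMeans

# Refit with several values of K and compare the resulting partitions
for k in (2, 3, 4):
    km = KMeans(n_clusters=k, n_init='auto', random_state=42).fit(X)
    print(f"K={k}: cluster sizes={np.bincount(km.labels_)}, "
          f"inertia={km.inertia_:.1f}")
```

Note that inertia always decreases as $K$ grows, so it cannot pick $K$ on its own; but a sharp drop followed by a flattening (here, expected around $K=3$) is a common informal signal that you have matched the natural number of groups.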