
Stroke Risk Prediction: Exploratory Data Analysis and Model Evaluation

September 05, 2023
Samantha Barker
🇬🇧 United Kingdom
Data Analysis
Samantha Barker, a data analysis expert with 10+ years of experience, holds a master's degree from Anderson University. She specializes in guiding students to complete their statistical assignments effectively.
Key Topics
  • Assignment Problem Description: Predicting Stroke Risk
    • Exploratory Data Analysis
    • BMI Analysis
    • Average Glucose Level Comparison
    • Hypertension and Stroke
    • Smoking Status and Stroke
    • Analysis and Evaluation
    • Linear Regression
    • Decision Tree
    • Neural Network
    • Appendix

In this comprehensive analysis, we delve into the critical task of predicting stroke risk by examining the association between various covariates and stroke occurrence. Through exploratory data analysis, we investigate key factors such as BMI, average glucose levels, hypertension, and smoking status to uncover their significance. We employ three distinct models—Linear Regression, Decision Tree, and a Neural Network—to assess their predictive capabilities. The results showcase the effectiveness of these models, offering valuable insights into stroke risk assessment and highlighting the importance of data-driven healthcare decisions.

Assignment Problem Description: Predicting Stroke Risk

In this data analysis assignment, we aim to predict whether a person is at risk of experiencing a stroke based on various covariates. We employ three different models: Linear Regression, a Decision Tree, and a Neural Network. The dataset is split into training and test sets in a 4:1 ratio, and the accuracy of each model is assessed by training it on the training dataset and evaluating its performance on the test dataset.
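
As a concrete illustration of this setup, the sketch below performs the 4:1 split with scikit-learn's train_test_split. The file name and the dropped columns mirror the Appendix; test_size=0.2 encodes the 4:1 ratio, and the fixed random_state is an assumption added for reproducibility.

import pandas as pd
from sklearn.model_selection import train_test_split

# Load the stroke dataset (same file as in the Appendix), drop the id column and missing rows
df = pd.read_csv("healthcare-dataset-stroke-data.csv").drop(columns="id").dropna()

X = df.drop(columns="stroke")   # covariates
y = df["stroke"]                # binary outcome: 1 = stroke, 0 = no stroke

# 4:1 split -> 80% training, 20% test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)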

Exploratory Data Analysis

BMI Analysis

We began our exploratory data analysis by plotting the BMI of individuals who have experienced a stroke and those who have not. The boxplot revealed no clear difference between the BMI distributions of the two groups.

  • Boxplot showing the BMI of individuals who experienced a stroke and those who didn’t
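
A minimal sketch of how such a boxplot could be drawn, assuming the DataFrame df has been loaded as in the Appendix with bmi and stroke columns (seaborn is an assumption here; any plotting library would do):

import seaborn as sns
import matplotlib.pyplot as plt

# Compare the BMI distributions of the no-stroke (0) and stroke (1) groups
sns.boxplot(data=df, x="stroke", y="bmi")
plt.xlabel("Stroke (0 = no, 1 = yes)")
plt.ylabel("BMI")
plt.show()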

Average Glucose Level Comparison

Next, we compared the average glucose levels between the two groups. While the median levels were similar, there was a notable difference in variance between the groups, indicating a potential association between stroke and blood sugar levels.

  • Boxplot comparing the avg. glucose level between the two groups
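
The medians-versus-spread comparison can also be checked numerically; a short sketch under the same assumptions about df:

# Similar medians but different spread between the no-stroke and stroke groups
print(df.groupby("stroke")["avg_glucose_level"].agg(["median", "var", "std"]))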

Hypertension and Stroke

We analyzed the relationship between hypertension and stroke by creating a contingency table and conducting a chi-squared test. The results showed a significant association between having hypertension and experiencing a stroke.

                 Hypertension: Yes    Hypertension: No
Stroke: Yes      66                   183
Stroke: No       432                  4429

Table 1: Association between hypertension and stroke
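
A sketch of how this contingency table and the chi-squared test could be produced, assuming the hypertension and stroke columns from the Appendix, using pandas' crosstab and SciPy's chi2_contingency:

import pandas as pd
from scipy.stats import chi2_contingency

# Cross-tabulate hypertension against stroke
table = pd.crosstab(df["stroke"], df["hypertension"])
print(table)

# Chi-squared test of independence
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p-value = {p_value:.4g}")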

Smoking Status and Stroke

Similarly, we assessed the relationship between smoking status and stroke using a contingency table and a chi-squared test. The test statistics revealed a statistically significant relationship between smoking and experiencing a stroke.

                   Stroke: No    Stroke: Yes
formerly smoked    815           70
never smoked       1802          90
smokes             747           42
Unknown            1497          47

Table 2: Relationship between smoking status and having a stroke
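
The same pattern applies to the four-level smoking_status variable; again a sketch under the same assumptions about df:

import pandas as pd
from scipy.stats import chi2_contingency

# Cross-tabulate smoking status against stroke and test for independence
smoking_table = pd.crosstab(df["smoking_status"], df["stroke"])
print(smoking_table)

chi2, p_value, dof, expected = chi2_contingency(smoking_table)
print(f"chi2 = {chi2:.2f}, p-value = {p_value:.4g}")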

Analysis and Evaluation

After conducting exploratory data analysis, we proceeded to build and evaluate predictive models.

Linear Regression

We trained a linear regression model and identified that age, hypertension, heart disease, and average glucose level were significant variables based on the p-values of the coefficients. The adjusted R-squared value indicated that only 7% of the outcome's variation was explained linearly by the covariates. Testing the model on the test dataset resulted in an accuracy of 0.83.

Dep. Variable: stroke            R-squared: 0.076
Model: OLS                       Adj. R-squared: 0.072
Method: Least Squares            F-statistic: 20.08
Date: Sun, 19 Mar 2023           Prob (F-statistic): 3.07e-56
No. Observations: 3927

                                   coef    std err         t    P>|t|    [0.025    0.975]
const                           -0.0929      0.019    -4.769    0.000    -0.131    -0.055
age                              0.0025      0.000    10.831    0.000     0.002     0.003
hypertension                     0.0392      0.011     3.562    0.000     0.018     0.061
heart_disease                    0.0464      0.015     3.140    0.002     0.017     0.075
avg_glucose_level                0.0004   7.16e-05     5.085    0.000     0.000     0.001
bmi                             -0.0006      0.000    -1.348    0.178    -0.001     0.000
gender_Male                      0.0017      0.006     0.277    0.782    -0.010     0.014
gender_Other                    -0.0269      0.189    -0.143    0.887    -0.397     0.343
ever_married_Yes                -0.0245      0.009    -2.731    0.006    -0.042    -0.007
work_type_Never_worked           0.0290      0.045     0.648    0.517    -0.059     0.117
work_type_Private                0.0099      0.009     1.073    0.283    -0.008     0.028
work_type_Self-employed         -0.0140      0.011    -1.223    0.221    -0.036     0.008
work_type_children               0.0528      0.016     3.352    0.001     0.022     0.084
Residence_type_Urban             0.0035      0.006     0.581    0.561    -0.008     0.015
smoking_status_formerly smoked   0.0072      0.010     0.711    0.477    -0.013     0.027
smoking_status_never smoked      0.0019      0.008     0.224    0.823    -0.014     0.018
smoking_status_smokes            0.0099      0.010     0.954    0.340    -0.010     0.030

Omnibus: 3307.510          Durbin-Watson: 2.020
Prob(Omnibus): 0.000       Jarque-Bera (JB): 60531.132
Skew: 4.174                Prob(JB): 0.00
Kurtosis: 20.328           Cond. No.: 7.85e+03
  • Linear Regression Model in Python to Train the Dataset
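
The reported accuracy of 0.83 comes from turning the continuous linear-regression output into a binary label; a minimal sketch, assuming the fitted lm, the test split from the Appendix, and the 0.1 cut-off used there:

from sklearn.metrics import accuracy_score

# Threshold the continuous prediction at 0.1 to obtain a stroke / no-stroke label
y_pred = (lm.predict(X_test) > 0.1).astype(int)
print("Linear regression accuracy:", accuracy_score(y_test, y_pred))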

Decision Tree

We utilized a decision tree model to predict stroke risk. After fitting the model on the training dataset and evaluating it on the test dataset, we achieved an accuracy of 0.91, outperforming the linear regression model. The regression diagnostics are shown in the plot below.

  • Regression Diagnostics
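
A hedged sketch of the decision tree step, assuming scikit-learn's DecisionTreeClassifier and the same train/test split as in the Appendix (the Appendix does not show how clf is constructed, so the constructor arguments here are assumptions):

from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Fit a decision tree classifier on the training data
clf = DecisionTreeClassifier(random_state=42)
clf.fit(X_train, y_train)

# Evaluate on the held-out test set
y_pred = clf.predict(X_test)
print("Decision tree accuracy:", accuracy_score(y_test, y_pred))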

Neural Network

A neural network model with two hidden layers of 64 units each was trained for 100 epochs. The model achieved a Mean Absolute Error (MAE) of 0.11 and an accuracy of 0.89, placing it between the decision tree and the linear regression model in terms of performance.
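
Because the Keras model in the Appendix is trained as a regressor (MSE loss, single linear output), its accuracy would again come from thresholding the continuous predictions; a sketch, assuming the trained model and the standardised X_test from the Appendix, with a 0.5 cut-off that is an assumption since the report does not state one:

from sklearn.metrics import accuracy_score

# Continuous network output -> binary stroke prediction via a cut-off
y_prob = model.predict(X_test).ravel()
y_pred = (y_prob > 0.5).astype(int)  # the 0.5 cut-off is an assumption
print("Neural network accuracy:", accuracy_score(y_test, y_pred))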

Appendix

import pandas as pd
import statsmodels.api as sm
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from tensorflow import keras

# Load the data, drop the identifier column and rows with missing values
df = pd.read_csv("healthcare-dataset-stroke-data.csv")
df = df.drop('id', axis=1)
df = df.dropna()

X = df[['gender', 'age', 'hypertension', 'heart_disease', 'ever_married',
        'work_type', 'Residence_type', 'avg_glucose_level', 'bmi',
        'smoking_status']]
y = df['stroke']

# 4:1 train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# One-hot encode the categorical covariates
X_train = pd.get_dummies(data=X_train, drop_first=True)
X_test = pd.get_dummies(data=X_test, drop_first=True)

# The 'Other' gender level may not appear in the test split, so add the missing dummy column
if 'gender_Other' not in X_test.columns:
    X_test['gender_Other'] = 0

# Align the test columns with the training columns
new_order = ['age', 'hypertension', 'heart_disease', 'avg_glucose_level', 'bmi',
             'gender_Male', 'gender_Other', 'ever_married_Yes',
             'work_type_Never_worked', 'work_type_Private',
             'work_type_Self-employed', 'work_type_children', 'Residence_type_Urban',
             'smoking_status_formerly smoked', 'smoking_status_never smoked',
             'smoking_status_smokes']
X_train = X_train[new_order]
X_test = X_test[new_order]

# OLS summary for inspecting coefficients and p-values
X_train1 = sm.add_constant(X_train)  # add intercept term
results = sm.OLS(y_train, X_train1).fit()
print(results.summary())

# Linear regression: threshold the continuous prediction at 0.1 to get a binary label
lm = LinearRegression()
lm.fit(X_train, y_train)
y1 = (lm.predict(X_test) > 0.1).astype(int)
print('Linear regression accuracy:', accuracy_score(y_test, y1))

# Decision tree classifier
clf = DecisionTreeClassifier(random_state=42)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print('Decision tree accuracy:', accuracy_score(y_test, y_pred))

# Standardise covariates before fitting the neural network
mean = X_train.mean(axis=0)
std = X_train.std(axis=0)
X_train = (X_train - mean) / std
X_test = (X_test - mean) / std

# Two hidden layers of 64 units each, trained as a regressor with MSE loss
model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(X_train.shape[1],)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
history = model.fit(X_train, y_train, epochs=100, validation_split=0.2)

test_loss, test_mae = model.evaluate(X_test, y_test)
print('Test MAE:', test_mae)
