Building a Simple Neural Network with Python, SQL Server, and Keras: Step-by-Step Guide

In the rapidly evolving field of artificial intelligence, neural networks have become a cornerstone for developing intelligent systems. To facilitate learning and experimentation with these models, I’ve developed a simple neural network using Keras, TensorFlow, Python, and Microsoft SQL Server. This project, featured in the GnoelixiAI Hub newsletter, demonstrates how to train a model on the Iris dataset stored in SQL Server.

 

Project Overview

The project showcases the integration of machine learning frameworks with SQL Server, enabling efficient data handling and model training. The Iris dataset, a classic in machine learning, serves as the training ground for this neural network.

 

Prerequisites

Before diving into the code, ensure you have the following installed:

  • Python: The programming language used for developing the neural network.
  • Keras and TensorFlow: Libraries for building and training neural networks.
  • Microsoft SQL Server: The database system where the Iris dataset is stored.
  • pyodbc: A Python library for connecting to SQL Server.

 

Setting Up the Environment

1. Install Python: Download and install Python from the official website.

2. Install Keras and TensorFlow: Use pip to install these libraries:

pip install keras tensorflow

3. Install pyodbc: Install the pyodbc library to enable Python to connect to SQL Server:

pip install pyodbc
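
To confirm that pyodbc can actually see a suitable driver on your machine, you can list the installed ODBC drivers with pyodbc's built-in drivers() function. The connection string used later in this article expects an entry such as "ODBC Driver 17 for SQL Server" to appear in this list:

import pyodbc

# Print the ODBC drivers visible to pyodbc; the SQL Server driver referenced
# in your connection string must appear in this list.
print(pyodbc.drivers())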

 

Preparing the Iris Dataset in SQL Server

First, create a table in your SQL Server database to store the Iris dataset. You can use the following SQL script:

CREATE TABLE IrisDataset (
    Id INT PRIMARY KEY,
    SepalLengthCm FLOAT,
    SepalWidthCm FLOAT,
    PetalLengthCm FLOAT,
    PetalWidthCm FLOAT,
    Species VARCHAR(50)
);

Next, insert the Iris dataset into this table. You can download the dataset from the UCI Machine Learning Repository and insert the data manually, or automate the load with a script such as the sketch below.
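
For illustration, here is a minimal Python sketch that bulk-loads the downloaded CSV into the table via pyodbc. It assumes the file is saved locally as iris.csv without a header row (as in the UCI download), and the server, database, and credential values are placeholders for your own environment:

import pandas as pd
import pyodbc

# Load the raw Iris CSV (the UCI file has no header row) and name the
# columns to match the IrisDataset table.
df = pd.read_csv(
    'iris.csv',
    header=None,
    names=['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm', 'Species']
)
df.insert(0, 'Id', range(1, len(df) + 1))  # synthetic primary key values

conn = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=your_server_name;'
    'DATABASE=your_database_name;'
    'UID=your_username;'
    'PWD=your_password'
)
cursor = conn.cursor()
cursor.fast_executemany = True  # noticeably faster for bulk inserts

cursor.executemany(
    "INSERT INTO IrisDataset (Id, SepalLengthCm, SepalWidthCm, PetalLengthCm, PetalWidthCm, Species) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    df.values.tolist()  # one parameter list per row
)
conn.commit()
conn.close()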

 

Building the Neural Network

With the environment set up and the data prepared, you can now build and train the neural network. Below is a Python script that connects to the SQL Server, retrieves the data, and trains a neural network model:

import pyodbc
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.utils import to_categorical

# Connect to SQL Server
conn = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=your_server_name;'
    'DATABASE=your_database_name;'
    'UID=your_username;'
    'PWD=your_password'
)

# Retrieve data
query = "SELECT SepalLengthCm, SepalWidthCm, PetalLengthCm, PetalWidthCm, Species FROM IrisDataset"
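# Note: pandas may warn when given a raw pyodbc connection instead of a
# SQLAlchemy engine, but the query still executes.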
df = pd.read_sql(query, conn)

# Preprocess data
X = df.drop('Species', axis=1).values
y = df['Species'].values

# Encode class values as integers
encoder = LabelEncoder()
encoded_y = encoder.fit_transform(y)
# Convert integers to one-hot encoding
dummy_y = to_categorical(encoded_y)

# Standardize features
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X_scaled, dummy_y, test_size=0.2, random_state=42)

# Define the model
model = Sequential()
model.add(Input(shape=(4,)))               # four input features
model.add(Dense(8, activation='relu'))     # hidden layer
model.add(Dense(3, activation='softmax'))  # one output neuron per class

# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Train the model
model.fit(X_train, y_train, epochs=50, batch_size=5, verbose=1)

# Evaluate the model
loss, accuracy = model.evaluate(X_test, y_test)
print(f'Accuracy: {accuracy * 100:.2f}%')

Explanation of the Code:

  • Data Retrieval: The script connects to the SQL Server database and retrieves the Iris dataset using a SQL query.
  • Data Preprocessing:
    • Features (X) and labels (y) are separated.
    • Labels are encoded into integers and then converted to one-hot encoding.
    • Features are standardized to have a mean of 0 and a standard deviation of 1.
    • The dataset is split into training and testing sets.
  • Model Definition: A sequential neural network model is defined with:
    • An input layer expecting the four features.
    • A hidden layer with 8 neurons and ReLU activation.
    • An output layer with 3 neurons (one per Iris species) and softmax activation.
  • Model Compilation: The model is compiled using categorical cross-entropy as the loss function and Adam as the optimizer.
  • Model Training: The model is trained for 50 epochs with a batch size of 5.
  • Model Evaluation: The model’s accuracy is evaluated on the test set.
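
As a quick sanity check after training, you can classify a new measurement. The short sketch below continues the script above, reusing its scaler, model, and encoder variables; the sample values are made up purely for illustration:

import numpy as np

# Classify a new flower measurement (values are illustrative only).
# The scaler fitted on the training data must also be applied here.
sample = np.array([[5.1, 3.5, 1.4, 0.2]])     # sepal/petal measurements in cm
sample_scaled = scaler.transform(sample)
probabilities = model.predict(sample_scaled)  # softmax class probabilities
predicted = encoder.inverse_transform([np.argmax(probabilities)])
print(predicted)  # e.g. ['Iris-setosa']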

 

Conclusion

This project demonstrates the seamless integration of machine learning models with SQL Server databases, enabling efficient data handling and model training. By leveraging Keras and TensorFlow, you can build and train neural networks on data stored in SQL Server, facilitating the development of intelligent applications.

For more details and to access the complete code, visit the GnoelixiAI Simple Neural Network repository.

Happy coding!

 


Subscribe to the GnoelixiAI Hub newsletter on LinkedIn and stay up to date with the latest AI news and trends.

Subscribe to my YouTube channel.

 

Reference: aartemiou.com (https://www.aartemiou.com)
© Artemakis Artemiou
