
Commit

Merge branch 'king04aman:main' into Word_Frequency_Counter
rkt-1597 authored Oct 24, 2024
2 parents a0d55d6 + 8c6ae32 commit 878a851
Showing 45 changed files with 2,747 additions and 72 deletions.
15 changes: 0 additions & 15 deletions .github/workflows/contributors.yml

This file was deleted.

84 changes: 62 additions & 22 deletions .github/workflows/welcome.yml
@@ -1,40 +1,80 @@
-name: New Contributor Welcome
+name: Welcome Comments
+
+permissions:
+  actions: write
+  attestations: write
+  checks: write
+  contents: write
+  deployments: write
+  id-token: write
+  issues: write
+  discussions: write
+  packages: write
+  pages: write
+  pull-requests: write
+  repository-projects: write
+  security-events: write
+  statuses: write

 on:
-  pull_request:
-    types: [opened, closed]
   issues:
-    types: [opened]
+    types: [opened, closed]
+  pull_request_target:
+    types: [opened, closed]

 jobs:
-  greet_new_contributor:
+  welcomer:
     runs-on: ubuntu-latest
     steps:
-      - uses: bubkoo/welcome-action@v1
+      - name: Auto Welcome on Issues or PRs
+        uses: actions/github-script@v6
         with:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          FIRST_ISSUE_REACTIONS: "+1, hooray, rocket, heart"
-          FIRST_ISSUE: >
-            👋 Greetings @{{ author }}!
-            We're thrilled to see you opening your first issue! Your input is invaluable to us. Don’t forget to adhere to our issue template for the best experience.
-          FIRST_PR: >
-            👋 Welcome aboard, @{{ author }}!
-            We're delighted to have your first pull request! Please take a moment to check our contributing guidelines to ensure a smooth process.
-          FIRST_PR_MERGED: >
-            🎉 Kudos @{{ author }}!
-            You've just merged your first pull request! We're excited to have you in our community. Keep up the fantastic contributions!
-          STAR_MESSAGE: If you enjoy this project, please consider ⭐ starring ⭐ this repository!
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          github-token: ${{ secrets.GITHUB_TOKEN }}
+          script: |
+            const author = context.payload.sender.login;
+            const commentBody = (message) => `👋 @${author} 👋\n\n${message}`;
+
+            if (context.eventName === 'issues') {
+              const issue = context.payload.issue;
+
+              if (context.payload.action === 'opened') {
+                const message = `We're thrilled to see you opening an issue! Your input is valuable to us. Don’t forget to fill out our issue template for the best experience. We will look into it soon.`;
+                github.rest.issues.createComment({
+                  issue_number: issue.number,
+                  owner: context.repo.owner,
+                  repo: context.repo.repo,
+                  body: commentBody(message),
+                });
+              } else if (context.payload.action === 'closed') {
+                const message = `Thanks for closing the issue! We appreciate your updates.`;
+                github.rest.issues.createComment({
+                  issue_number: issue.number,
+                  owner: context.repo.owner,
+                  repo: context.repo.repo,
+                  body: commentBody(message),
+                });
+              }
+            } else if (context.eventName === 'pull_request_target') {
+              const pr = context.payload.pull_request;
+
+              if (context.payload.action === 'opened') {
+                const message = `We're delighted to have your pull request! Please take a moment to check our contributing guidelines and ensure you've filled out the PR template for a smooth process. We will review it soon.`;
+                github.rest.issues.createComment({
+                  issue_number: pr.number,
+                  owner: context.repo.owner,
+                  repo: context.repo.repo,
+                  body: commentBody(message),
+                });
+              } else if (context.payload.action === 'closed') {
+                const message = pr.merged
+                  ? `🎉 You've just merged your pull request! We're excited to have you in our community. Keep up the fantastic contributions to the project!`
+                  : `Thanks for closing the pull request! Your contributions are valuable to us.`;
+                github.rest.issues.createComment({
+                  issue_number: pr.number,
+                  owner: context.repo.owner,
+                  repo: context.repo.repo,
+                  body: commentBody(message),
+                });
+              }
+            }
30 changes: 0 additions & 30 deletions CONTRIBUTORS.svg

This file was deleted.

125 changes: 125 additions & 0 deletions Mental Health chatbot/Mental_health_bot.py
@@ -0,0 +1,125 @@
import pandas as pd
import numpy as np
import json
import nltk
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adam
import os

# Download necessary NLTK data
nltk.download('punkt')  # newer NLTK releases may also require nltk.download('punkt_tab')
nltk.download('wordnet')

# Initialize lemmatizer
lemmatizer = WordNetLemmatizer()

# Load the sentiment dataset (any CSV whose name contains 'Combined' and 'Data')
try:
    print("Files in current directory:", os.listdir())
    csv_files = [f for f in os.listdir() if f.endswith('.csv') and 'Combined' in f and 'Data' in f]
    if csv_files:
        df = pd.read_csv(csv_files[0])
        print(f"Loaded file: {csv_files[0]}")
        print("DataFrame columns:", df.columns.tolist())
        print("DataFrame shape:", df.shape)
        print(df.head())
    else:
        raise FileNotFoundError("No matching CSV file found")
except FileNotFoundError as e:
    print(f"Error: {str(e)}")
    print("Ensure 'Combined_Data.csv' is in the same directory.")
    print("Current working directory:", os.getcwd())
    exit(1)

# Load intents.json
try:
    with open('intents.json', 'r') as f:
        intents = json.load(f)
except FileNotFoundError as e:
    print(f"Error: {str(e)}. Ensure 'intents.json' is available.")
    exit(1)

# Preprocess text data: lowercase, tokenize, and lemmatize
def preprocess_text(text):
    tokens = nltk.word_tokenize(str(text).lower())
    return ' '.join([lemmatizer.lemmatize(token) for token in tokens])

# Determine the correct column name for text data
text_column = 'statement' if 'statement' in df.columns else df.columns[0]

df['processed_text'] = df[text_column].apply(preprocess_text)

# Create a TF-IDF vectorizer fitted on the dataset corpus
vectorizer = TfidfVectorizer()
tfidf_matrix = vectorizer.fit_transform(df['processed_text'])

# Prepare training data: one sample per intent pattern, one-hot label per tag
X, y = [], []

for intent in intents['intents']:
    for pattern in intent['patterns']:
        X.append(preprocess_text(pattern))
        y.append(intent['tag'])

X = vectorizer.transform(X).toarray()
y = pd.get_dummies(y).values

# Ensure X and y have the same number of samples
assert X.shape[0] == y.shape[0], f"Shape mismatch: X={X.shape}, y={y.shape}"

# Build the intent-classification model
model = Sequential([
    Dense(128, input_shape=(X.shape[1],), activation='relu'),
    Dropout(0.5),
    Dense(64, activation='relu'),
    Dropout(0.5),
    Dense(y.shape[1], activation='softmax')
])

model.compile(optimizer=Adam(learning_rate=0.01), loss='categorical_crossentropy', metrics=['accuracy'])

# Train the model with error handling
try:
    model.fit(X, y, epochs=200, batch_size=32, verbose=1)
except Exception as e:
    print(f"Error during model training: {str(e)}")
    print(f"X shape: {X.shape}, y shape: {y.shape}")
    raise

# Chatbot function: pair a retrieval-based response with an intent-based one
def chatbot_response(user_input):
    processed_input = preprocess_text(user_input)
    input_vector = vectorizer.transform([processed_input]).toarray()

    # Retrieve the most similar statement from the dataset
    similarities = cosine_similarity(input_vector, tfidf_matrix)
    most_similar_idx = similarities.argmax()
    response = df.iloc[most_similar_idx][text_column]

    # Classify intent (pd.get_dummies sorts columns, matching the training labels)
    intent_probs = model.predict(input_vector)[0]
    intent_idx = intent_probs.argmax()
    intent_tag = list(pd.get_dummies([i['tag'] for i in intents['intents']]).columns)[intent_idx]

    # Get a canned response for the predicted intent, with a fallback so the
    # variable is always defined even if no tag matches
    intent_response = "I'm here to listen. Could you tell me more?"
    for intent in intents['intents']:
        if intent['tag'] == intent_tag:
            intent_response = np.random.choice(intent['responses'])
            break

    return f"Dataset Response: {response}\n\nIntent Response: {intent_response}"

# Main chat loop
print("Mental Health Chatbot: Hello! I'm here to provide support and resources for mental health. How can I help you today?")
while True:
    user_input = input("You: ")
    if user_input.lower() in ['quit', 'exit', 'bye']:
        print("Mental Health Chatbot: Take care! Remember, it's okay to seek help when you need it.")
        break
    response = chatbot_response(user_input)
    print("Mental Health Chatbot:", response)

70 changes: 70 additions & 0 deletions Mental Health chatbot/README_BOT.md
@@ -0,0 +1,70 @@
# Mental Health Chatbot

This project implements a chatbot designed to provide support and information related to mental health. It uses natural language processing and machine learning techniques to understand user input and generate appropriate responses.

## Features

- Sentiment analysis of user input
- Personalized responses based on user's emotional state
- Information provision on various mental health topics
- Supportive and empathetic conversation

## Installation

1. Clone this repository:
```
git clone https://github.com/Ashutoshdas-dev/All-In-One-Python-Projects
cd "All-In-One-Python-Projects/Mental Health chatbot"
```

2. Set up a virtual environment:
```
python -m venv venv
```

3. Activate the virtual environment:
- On Windows:
```
venv\Scripts\activate
```
- On macOS and Linux:
```
source venv/bin/activate
```

4. Install the required dependencies:
```
pip install -r requirements.txt
```

## Usage

1. Ensure your virtual environment is activated (see step 3 in Installation).

2. Run the main script:
```
python Mental_health_bot.py
```

3. Follow the prompts to interact with the chatbot.

4. Type 'quit', 'exit', or 'bye' to end the conversation.

5. When you're done, deactivate the virtual environment:
```
deactivate
```

## Datasets

This project utilizes the following datasets for training and improving the chatbot's responses:

1. [Sentiment Analysis for Mental Health](https://www.kaggle.com/datasets/suchintikasarkar/sentiment-analysis-for-mental-health)
- Description: This dataset contains text data labeled with sentiment for mental health-related content.
- Use: Helps in training the sentiment analysis component of the chatbot.

2. [Mental Health Conversational Data](https://www.kaggle.com/datasets/elvis23/mental-health-conversational-data)
- Description: This dataset includes conversations related to mental health topics.
- Use: Aids in training the chatbot to generate more natural and context-appropriate responses.


5 changes: 5 additions & 0 deletions Mental Health chatbot/requirements.txt
@@ -0,0 +1,5 @@
pandas
numpy
nltk
scikit-learn
tensorflow
34 changes: 34 additions & 0 deletions My-Personal-Journal/README.md
@@ -0,0 +1,34 @@
# Personal Journal

A simple web-based application for maintaining a personal journal. Users can create entries, tag them, and search through past entries by date or keywords. The application is built using Flask and SQLite for a lightweight and efficient experience.

## Features

- **Add New Entries**: Users can add journal entries with mood, content, and tags.
- **Search Entries**: Search through entries using keywords or specific dates.
- **Tag Management**: Create and view tags associated with each entry, and filter entries by tags.
- **User-Friendly Interface**: A clean and professional UI for easy navigation and use.

## Technologies Used

- Python
- Flask
- SQLite
- HTML/CSS

## Installation

1. **Clone the Repository**:
   ```bash
   git clone <repository-url>
   cd your_project
   ```

2. **Install Required Packages**: Make sure you have Python installed (preferably Python 3), then install the dependencies with pip:
   ```bash
   pip install -r requirements.txt
   ```

3. **Run the Application**: Start the Flask application:
   ```bash
   python app.py
   ```

4. **Access the App**: Open your web browser and navigate to http://127.0.0.1:8080 to use the application.
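
`app.py` itself is not shown in this diff, so purely as orientation: a minimal Flask + SQLite journal along the lines the README describes might look like the sketch below. The table layout, route names, and `journal.db` filename are assumptions, not the project's actual code.

```python
# Minimal sketch of a Flask + SQLite journal app (assumed structure).
import sqlite3
from flask import Flask, g, request

app = Flask(__name__)
DB = "journal.db"  # assumed database filename

def get_db():
    # Open a per-request connection and create the table on first use
    if "db" not in g:
        g.db = sqlite3.connect(DB)
        g.db.execute(
            """CREATE TABLE IF NOT EXISTS entries (
                   id INTEGER PRIMARY KEY AUTOINCREMENT,
                   created TEXT DEFAULT CURRENT_TIMESTAMP,
                   mood TEXT, content TEXT, tags TEXT)"""
        )
    return g.db

@app.teardown_appcontext
def close_db(exc):
    db = g.pop("db", None)
    if db is not None:
        db.close()

@app.route("/add", methods=["POST"])
def add_entry():
    db = get_db()
    db.execute(
        "INSERT INTO entries (mood, content, tags) VALUES (?, ?, ?)",
        (request.form["mood"], request.form["content"], request.form.get("tags", "")),
    )
    db.commit()
    return "Entry saved", 201

@app.route("/search")
def search():
    # Keyword search over content and tags; a date filter could match on `created`
    q = f"%{request.args.get('q', '')}%"
    rows = get_db().execute(
        "SELECT created, mood, content, tags FROM entries "
        "WHERE content LIKE ? OR tags LIKE ?",
        (q, q),
    ).fetchall()
    return {"results": [list(r) for r in rows]}

if __name__ == "__main__":
    app.run(port=8080, debug=True)
```

Storing tags as a comma-separated column keeps the sketch small; the real app may well normalize tags into their own table to support the tag filtering the README mentions.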