Using pandas and Plotly, this Python project analyzes Pokemon capture outcomes across different conditions and Pokeball choices. Through repeated capture simulations and Plotly-generated graphs, it measures how factors such as Pokemon health, level, and ball effectiveness affect the capture rate, offering insights into optimal capture strategies.
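The core of such a study can be sketched with a Monte Carlo loop. The capture formula below is purely illustrative (the project's actual model, and its pandas/Plotly analysis layer, are not shown here); `capture_probability`, its `base_rate` parameter, and the ball multipliers are assumptions for the sketch:

```python
import random

def capture_probability(hp_fraction, ball_multiplier, base_rate=0.4):
    """Toy capture model (assumption): the chance rises as HP falls and with
    stronger balls; the real game formula also factors species catch rate."""
    return max(0.0, min(1.0, base_rate * ball_multiplier * (1.5 - hp_fraction)))

def simulate(n_attempts, hp_fraction, ball_multiplier, rng):
    """Estimate the capture rate by repeated random throws."""
    caught = sum(rng.random() < capture_probability(hp_fraction, ball_multiplier)
                 for _ in range(n_attempts))
    return caught / n_attempts

rng = random.Random(42)
results = {(ball, hp): simulate(10_000, hp, mult, rng)
           for ball, mult in [("pokeball", 1.0), ("ultraball", 2.0)]
           for hp in (1.0, 0.5, 0.1)}
for key, rate in sorted(results.items()):
    print(key, round(rate, 3))
```

A table like `results` is what would then be loaded into a pandas DataFrame and plotted with Plotly to compare strategies.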
This project builds a Sokoban puzzle solver by implementing several search algorithms (BFS, DFS, Greedy, and A*), using admissible heuristics to guide the informed searches toward optimal solutions.
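As a minimal sketch of the A* variant (not the project's actual code), the snippet below searches over `(player, boxes)` states and uses one classic admissible heuristic: the sum of each box's Manhattan distance to its nearest goal, which never overestimates because every box needs at least that many pushes.

```python
import heapq
import itertools

def solve_sokoban(walls, player, boxes, goals):
    """A* over (player position, box positions) states; returns a move list."""
    def h(bs):
        # Admissible heuristic: each box's Manhattan distance to nearest goal.
        return sum(min(abs(bx - gx) + abs(by - gy) for gx, gy in goals)
                   for bx, by in bs)
    tie = itertools.count()                 # tiebreaker so states never compare
    start = (player, frozenset(boxes))
    frontier = [(h(start[1]), 0, next(tie), start, [])]
    best_g = {start: 0}
    while frontier:
        _, g, _, (pos, bs), path = heapq.heappop(frontier)
        if bs == goals:
            return path
        for move, (dx, dy) in [("U", (0, -1)), ("D", (0, 1)),
                               ("L", (-1, 0)), ("R", (1, 0))]:
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt in walls:
                continue
            nbs = bs
            if nxt in bs:                   # walking into a box pushes it
                dest = (nxt[0] + dx, nxt[1] + dy)
                if dest in walls or dest in bs:
                    continue
                nbs = (bs - {nxt}) | {dest}
            state = (nxt, nbs)
            if g + 1 < best_g.get(state, float("inf")):
                best_g[state] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nbs), g + 1, next(tie),
                                          state, path + [move]))
    return None                             # no solution

# Tiny corridor  #####  /  #@$.#  /  #####  : one push right wins.
walls = {(x, y) for x in range(5) for y in (0, 2)} | {(0, 1), (4, 1)}
path = solve_sokoban(walls, (1, 1), frozenset({(2, 1)}), frozenset({(3, 1)}))
print(path)
```

Swapping the priority to `h` alone gives Greedy, while a FIFO/LIFO frontier gives BFS/DFS, which is how the four algorithms typically share one search skeleton.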
This project is dedicated to optimizing character configurations for a role-playing game (RPG) using genetic algorithms. Characters belong to four distinct classes: Warriors, Archers, Defenders, and Assassins, each with specific performance objectives for attack and defense.
The genetic algorithm implementation includes various genetic operators (crossover and mutation), selection and replacement strategies, and termination criteria. All algorithm parameters are specified in an external configuration file for flexibility and experimentation.
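A stripped-down version of that loop is sketched below. The fitness function, gene layout, and parameter values here are stand-ins (the project reads its real parameters from the configuration file); it only illustrates one-point crossover, Gaussian mutation, and elitist selection/replacement:

```python
import random

def fitness(genes):
    """Toy objective (assumption): a Warrior weighs attack over defense."""
    attack, defense = sum(genes[:3]), sum(genes[3:])
    return 0.7 * attack + 0.3 * defense

def one_point_crossover(a, b, rng):
    cut = rng.randrange(1, len(a))          # split parents at a random point
    return a[:cut] + b[cut:]

def mutate(genes, rate, rng):
    # Each gene is perturbed with probability `rate`.
    return [g + rng.gauss(0, 1) if rng.random() < rate else g for g in genes]

def evolve(pop_size=30, n_genes=6, generations=50, mutation_rate=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 10) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]        # elitist selection + replacement
        children = [mutate(one_point_crossover(rng.choice(elite),
                                               rng.choice(elite), rng),
                           mutation_rate, rng)
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(round(fitness(best), 2))
```

In the project, the selection and replacement strategies, operator probabilities, and termination criteria used here as fixed constants are exactly the knobs exposed by the external configuration file.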
This project involves implementing simple and multilayer perceptrons to solve various problems. It explores logical operations, classification tasks, and digit recognition using neural networks. The project aims to understand the capabilities of these neural models and their generalization performance.
This project provides a comprehensive exploration of neural networks, from basic single-layer perceptrons to more advanced multilayer architectures. It aims to enhance understanding of neural network capabilities and their application in real-world problem-solving scenarios.
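The single-layer starting point can be illustrated with the classic perceptron rule learning logical AND (a minimal sketch, not the project's code; the ±1 encoding and learning rate are assumptions). AND is linearly separable, so the rule is guaranteed to converge; XOR is not, which is precisely what motivates the multilayer architectures:

```python
import random

def train_perceptron(samples, epochs=50, lr=0.1, seed=0):
    """Single perceptron with a step activation, trained by the perceptron
    rule: nudge the weights only when the prediction is wrong."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(3)]   # bias, w1, w2
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] + w[1] * x1 + w[2] * x2 >= 0 else -1
            err = target - out
            w[0] += lr * err
            w[1] += lr * err * x1
            w[2] += lr * err * x2
    return w

def predict(w, x):
    return 1 if w[0] + w[1] * x[0] + w[2] * x[1] >= 0 else -1

# Logical AND with +1/-1 encoding.
AND = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
w = train_perceptron(AND)
print([predict(w, x) for x, _ in AND])    # all four points classified
```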
This project focuses on unsupervised learning and comprises various exercises involving Kohonen networks, Principal Component Analysis (PCA), and pattern recognition using the Hopfield model.
The exercises cover clustering countries by their characteristics, computing principal components, and recognizing noisy patterns.
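The Hopfield part of the pipeline can be sketched as follows (an illustrative toy, not the project's implementation; the ten-unit patterns are made up). Weights are learned with the Hebbian rule and a noisy pattern is recovered by iterating sign updates until the network settles into the nearest stored attractor:

```python
def train_hopfield(patterns):
    """Hebbian learning: W[i][j] = sum over patterns of x_i * x_j,
    with a zero diagonal; units take values +1/-1."""
    n = len(patterns[0])
    return [[0 if i == j else sum(p[i] * p[j] for p in patterns)
             for j in range(n)] for i in range(n)]

def recall(W, state, steps=5):
    """Synchronous updates: each unit takes the sign of its local field."""
    state = list(state)
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(len(state))) >= 0
                 else -1 for i in range(len(state))]
    return state

p1 = [1, 1, 1, 1, 1, -1, -1, -1, -1, -1]
p2 = [1, -1, 1, -1, 1, -1, 1, -1, 1, -1]
W = train_hopfield([p1, p2])
noisy = [-p1[0]] + p1[1:]                  # flip one bit of p1
print(recall(W, noisy) == p1)              # the stored pattern is recovered
```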
This project reuses our earlier Multilayer Perceptron implementation to build an autoencoder with a two-dimensional latent space that reproduces 35-pixel letters with at most one pixel off. We then extended the architecture into a variational autoencoder (VAE) and used the continuous distribution learned by its decoder to generate new emojis. Both models were trained without external libraries beyond numpy; we wrote the backpropagation algorithm and the ADAM optimizer ourselves.
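The hand-written ADAM update can be sketched like this (a generic restatement of the published algorithm, not our project's code; the quadratic demo objective stands in for the autoencoder's backpropagated gradients):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update: exponential moving averages of the gradient (m) and
    its square (v), with bias correction for the early steps (t starts at 1)."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Demo objective (assumption): minimize f(theta) = ||theta||^2, gradient 2*theta.
theta = np.array([3.0, -2.0])
m = v = np.zeros_like(theta)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(theta)                               # parameters driven toward zero
```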
- 62248 - José Rodolfo Mentasti
- 62510 - Martín Augusto Ippolito
- 62512 - Marco Scilipoti
- 62618 - Axel Facundo Preiti Tasat
- 62500 - Gastón Ariel Francois
- 62103 - Tomás Gay Bare