---
title: Pascal Sager - Research and Projects
layout: portfolio
id: index
---
<div id="particles">
<div id="intro">
<div id="lead-content">
<h1>Pascal Sager</h1>
<hr style="width: 100%; background-color: #ffffff; border-color: #ffffff;">
<h2 id="myJobs"></h2>
</div>
<div id="lead-overlay"></div>
<div id="lead-down">
<span>
<div>
<i class="fa fa-circle-chevron-down fa-3x" aria-hidden="true"></i>
</div>
</span>
</div>
</div>
</div>
{% include about-me.html %}
{% include publication-details.html modalId="dnaModal" modalLabel="dnaModalLabel" title="The Dynamic Net Architecture: Learning Robust and Holistic Visual Representations Through Self-Organizing Networks"
url="https://doi.org/10.48550/arXiv.2407.05650" imagePath="/assets/images/dna.png"
abstract="We present a novel intelligent-system architecture called 'Dynamic Net Architecture' (DNA) that relies on recurrence-stabilized networks and discuss it in application to vision. Our architecture models a (cerebral cortical) area wherein elementary feature neurons encode details of visual structures, and coherent nets of such neurons model holistic object structures. By interpreting smaller or larger coherent pieces of an area network as complex features, our model encodes hierarchical feature representations essentially different than artificial neural networks (ANNs). DNA models operate on a dynamic connectionism principle, wherein neural activations stemming from initial afferent signals undergo stabilization through a self-organizing mechanism facilitated by Hebbian plasticity alongside periodically tightening inhibition. In contrast to ANNs, which rely on feed-forward connections and backpropagation of error, we posit that this processing paradigm leads to highly robust representations, as by employing dynamic lateral connections, irrelevant details in neural activations are filtered out, freeing further processing steps from distracting noise and premature decisions. We empirically demonstrate the viability of the DNA by composing line fragments into longer lines and show that the construction of nets representing lines remains robust even with the introduction of up to 59% noise at each spatial location. Furthermore, we demonstrate the model's capability to reconstruct anticipated features from partially obscured inputs and that it can generalize to patterns not observed during training. In this work, we limit the DNA to one cortical area and focus on its internals while providing insights into a standalone area's strengths and shortcomings. Additionally, we provide an outlook on how future work can implement invariant object recognition by combining multiple areas."
%}
{% include publication-details.html modalId="consensusModal" modalLabel="consensusModalLabel" title="Consensus Task Interaction Trace Recommender to Guide Developers’ Software Navigation"
url="??" imagePath="/assets/images/consensus.png"
abstract="Developers must complete change tasks on large software systems for maintenance and development purposes. Having a custom software system with numerous instances that meet the growing client demand for features and functionalities increases the software complexity. Developers, especially newcomers, must spend a significant amount of time navigating through the source code and switching back and forth between files in order to understand such a system and find the parts relevant for performing current tasks. This navigation can be difficult, time-consuming and affect developers' productivity. To help guide developers' navigation towards successfully resolving tasks with minimal time and effort, we present a task-based recommendation approach that exploits aggregated developers' interaction traces. Our novel approach, Consensus Task Interaction Trace Recommender (CITR), recommends file(s)-to-edit that help perform a set of tasks based on a tasks-related set of interaction traces obtained from developers who performed similar change tasks on the same or different custom instances of the same system. Our approach uses a consensus algorithm, which takes as input task-related interaction traces and recommends a consensus task interaction trace that developers can use to complete given similar change tasks that require editing (a) common file(s). To evaluate the efficiency of our approach, we perform three different evaluations. The first evaluation measures the accuracy of CITR recommendations. In the second evaluation, we assess to what extent CITR can help developers by conducting an observational controlled experiment in which two groups of developers performed evaluation tasks with and without the recommendations of CITR. In the third and last evaluation, we compare CITR to a state-of-the-art recommendation approach, MI. Results report with statistical significance that CITR can correctly recommend on average 73% of the files to be edited. 
Furthermore, they show that CITR can increase developers' successful task completion rate. CITR outperforms MI with an average of 31% higher recommendation accuracy."
%}
{% include publication-details.html modalId="gptModal" modalLabel="gptModalLabel" title="So You Want Your Private LLM at Home? A Survey and Benchmark of Methods for Efficient GPTs"
url="https://doi.org/10.21256/zhaw-30279" imagePath="/assets/images/gpt.png"
abstract="At least since the introduction of ChatGPT, the abilities of generative large language models (LLMs), sometimes called GPTs, are at the center of the attention of AI researchers, entrepreneurs, and others. However, for many applications, it is not possible to call an existing LLM service via an API due to data protection concerns or when no task-appropriate LLM exists. On the other hand, deploying or training a private LLM is often prohibitively computationally expensive. In this paper, we give an overview of the most important recent methodologies that help reduce the computational footprint of LLMs. We further present extensive benchmarks for seven methods from two of the most important areas of recent progress: model quantization and low-rank adapters, showcasing how it is possible to leverage state-of-the-art LLMs with limited resources. Our benchmarks include resource consumption metrics (e.g. GPU memory usage), a state-of-the-art quantitative performance evaluation as well as a qualitative performance study conducted by eight individual human raters. Our evaluations show that quantization has a profound effect on GPU memory requirements. However, we also show that these quantization methods, contrary to how they are advertised, cause a noticeable loss in text quality. We further show that low-rank adapters allow effective model fine-tuning with moderate compute resources. For methods that require less than 16 GB of GPU memory, we provide easy-to-use Jupyter notebooks that allow anyone to deploy and fine-tune state-of-the-art LLMs on the Google Colab free tier within minutes without any prior experience or infrastructure."
%}
{% include publication-details.html modalId="morModal" modalLabel="morModalLabel" title="Real World Music Object Recognition"
url="https://transactions.ismir.net/articles/10.5334/tismir.157" imagePath="/assets/images/mor.png"
abstract="We present solutions to two of the most pressing issues in contemporary optical music recognition (OMR).
We improve recognition accuracy on low-quality, real-world (i.e. containing ageing, lighting, or dirt artefacts
among others) input data and provide confidence-rated model outputs to enable efficient human post-processing.
Specifically, we present (i) a sophisticated input augmentation scheme that can reduce the gap between sanitised
benchmarks and realistic tasks through a combination of synthetic data and noisy perturbations of real-world
documents; (ii) an adversarial discriminative domain adaptation method that can be employed to improve the
performance of OMR systems on low-quality data; (iii) a combination of model ensembles and prediction fusion,
which generates trustworthy confidence ratings for each prediction. We evaluate our contributions on a newly
created test set consisting of manually annotated pages of varying real-world quality, sourced from the
International Music Score Library Project (IMSLP)/Petrucci Music Library. With the presented data augmentation
scheme, we achieve a doubling in detection performance from 36.0% to 73.3% on noisy real-world data compared to
state-of-the-art training. This result is then combined with robust confidence ratings, paving the way for OMR
to be deployed in the real world. Additionally, we show the merits of unsupervised adversarial domain adaptation
for OMR, raising the 36.0% baseline to 48.9%."%}
{% include publication-details.html modalId="udaModal" modalLabel="udaModalLabel" title="Unsupervised Domain Adaptation for Vertebrae Detection and Identification in 3D CT Volumes Using a Domain Sanity Loss"
url="https://www.mdpi.com/2313-433X/8/8/222" imagePath="/assets/images/uda.png" abstract="A variety of medical computer vision applications analyze 2D slices of computed tomography (CT)
scans, whereas axial slices from the body trunk region are usually identified based on their
relative position to the spine. A limitation of such systems is that either the correct slices
must be extracted manually or labels of the vertebrae are required for each CT scan to develop
an automated extraction system. In this paper, we propose an unsupervised domain adaptation
(UDA) approach for vertebrae detection and identification based on a novel Domain Sanity Loss
(DSL) function. With UDA the model’s knowledge learned on a publicly available (source) data set
can be transferred to the target domain without using target labels, where the target domain is
defined by the specific setup (CT modality, study protocols, applied pre- and post-processing) at the
point of use (e.g., a specific clinic with its specific CT study protocols). With our approach,
a model is trained on the source and target data set in parallel. The model optimizes a
supervised loss for labeled samples from the source domain and the DSL loss function based on
domain-specific “sanity checks” for samples from the unlabeled target domain. Without using
labels from the target domain, we are able to identify vertebra centroids with an accuracy of
72.8%. By adding only ten target labels during training, the accuracy increases to 89.2%, which
is on par with the current state-of-the-art for fully supervised learning, while using about 20
times fewer labels. Thus, our model can be used to extract 2D slices from 3D CT scans on
arbitrary data sets fully automatically without requiring an extensive labeling effort,
contributing to the clinical adoption of medical imaging by hospitals."%}
{% include publication-details.html modalId="sdsModal" modalLabel="sdsModalLabel" title="A Survey of Un-, Weakly-, and Semi-Supervised Learning
Methods for Noisy, Missing and Partial Labels in Industrial Vision Applications"
url="https://ieeexplore.ieee.org/document/9474624" imagePath="/assets/images/sds.png" abstract="When applying deep learning methods in an industrial vision application, they often fall short
of the performance shown in a clean and controlled lab environment due to data quality issues.
Few would consider the actual labels as a driving factor, yet inaccurate label data can impair
model performance significantly. However, being able to mitigate inaccurate or incomplete labels
might also be a cost-saver for real-world projects. Here, we survey state-of-the-art deep
learning approaches to resolve such missing labels, noisy labels, and partially labeled data in
the prospect of an industrial vision application. We systematically present un-, weakly-, and
semi-supervised approaches from 'A' like anomaly detection to 'Z' like zero-shot classification
to resolve these challenges by embracing them."%}
<div id="publications">
<h2 class="heading reveal2">Publications</h2>
<div>
<div id="carouselExampleIndicators" class="carousel slide" data-ride="carousel">
<ol class="carousel-indicators">
<li data-target="#carouselExampleIndicators" data-slide-to="0" class="active"></li>
<li data-target="#carouselExampleIndicators" data-slide-to="1"></li>
</ol>
<div class="carousel-inner" style="padding: 10px">
<div class="container">
<div class="carousel-item active">
{% include publication.html modalName="#dnaModal" title="The Dynamic Net Architecture: Learning
Robust and Holistic Visual Representations Through Self-Organizing Networks"
title_short="The Dynamic Net Architecture"
author="P. Sager" useEtAl=true journal="preprint arXiv:2407.05650"
year="2024" pages=""
doi_link="https://doi.org/10.48550/arXiv.2407.05650" doi="10.48550/arXiv.2407.05650" %}
{% include publication.html modalName="#consensusModal" title="Consensus Task Interaction Trace
Recommender to Guide Developers’ Software Navigation"
title_short="Consensus Task Interaction Trace Recommender to Guide Developers’ Software
Navigation"
author="L. Etaiwi, P. Sager" useEtAl=true journal="Empirical Software Engineering"
year="2024" pages=""
doi_link="" doi="??" %}
{% include publication.html modalName="#gptModal" title="So You Want Your Private LLM at
Home? A Survey and Benchmark of Methods for Efficient GPTs"
title_short="So You Want Your Private LLM at
Home? A Survey and Benchmark of Methods for Efficient GPTs"
author="L. Tuggener, P. Sager" useEtAl=true journal="11th IEEE Swiss Conference on Data
Science (SDS)"
year="2024" pages=""
doi_link="https://doi.org/10.21256/zhaw-30279"
doi="10.21256/zhaw-30279" %}
</div>
<div class="carousel-item">
{% include publication.html modalName="#morModal" title="Real World Music Object
Recognition"
title_short="Real World Music Object Recognition"
author="L. Tuggener, R. Emberger, A. Ghosh, P. Sager" useEtAl=true journal="Transactions of
the
International
Society for Music Information Retrieval" year="2024" pages="pp. 1-14"
doi_link="https://transactions.ismir.net/articles/10.5334/tismir.157/"
doi="10.5334/tismir.157" %}
{% include publication.html modalName="#udaModal" title="Unsupervised Domain Adaptation for
Vertebrae Detection and Identification in 3D CT Volumes Using a Domain Sanity Loss"
title_short="Unsupervised Domain Adaptation for Vertebrae Detection
and Identification in 3D CT Volumes Using a Domain Sanity Loss"
author="P. Sager" useEtAl=true
journal="Journal
of Imaging" year="2022" pages="8(8):222" doi_link="https://www.mdpi.com/2313-433X/8/8/222"
doi="10.3390/jimaging8080222" %}
{% include publication.html modalName="#sdsModal" title="A Survey of Un-, Weakly-, and
Semi-Supervised Learning Methods for Noisy, Missing and Partial Labels in Industrial Vision
Applications"
title_short="A Survey of Un-, Weakly-, and
Semi-Supervised Learning Methods for Noisy, Missing and Partial Labels"
author="N. Simmler, P. Sager"
useEtAl=true journal="8th Swiss Conference on Data
Science (SDS)" year="2021" pages="pp. 26-31"
doi_link="https://ieeexplore.ieee.org/document/9474624"
doi="10.1109/SDS51136.2021.00012" %}
</div>
</div>
</div>
<a class="carousel-control-prev" href="#carouselExampleIndicators" role="button"
data-slide="prev">
<i class="fa-solid fa-chevron-left fa-2xl"></i>
</a>
<a class="carousel-control-next" href="#carouselExampleIndicators" role="button"
data-slide="next">
<i class="fa-solid fa-chevron-right fa-2xl"></i>
</a>
</div>
</div>
</div>
<div id="experience" class="background-alt">
<h2 class="heading reveal2">Experience</h2>
<div id="experience-timeline" class="reveal">
<div data-date="February 2024 – Today">
{% include experience-detail.html
employer="Institute of Neuroinformatics, University of Zurich and ETH Zurich"
jobtitle="Visiting Researcher"
description="I am a visiting PhD student in the 'Neural Learning and Intelligent System' group of Prof. Dr.
Benjamin Grewe. I focus on biologically inspired algorithms to build actionable object representations."
%}
</div>
<div data-date="September 2020 – Today" class="reveal">
<h3>Center for AI, Zurich University of Applied Sciences</h3>
<h4>PhD Student, Head of AI Demonstrators, Board Member of Sustainable Impact Program</h4>
<p>
I started at the Center for AI as a research assistant in computer vision. After one and a half years, I
became head of GPU infrastructure and demonstrators. In October 2023, I started my PhD in the field of
biologically inspired learning algorithms and stepped down as head of GPU infrastructure to focus more
on my PhD research and on leading the demo projects. Besides these activities, I am also a board member of
the University's Sustainable Impact Program, shaping various sustainability initiatives.
</p>
</div>
<div data-date="August 2023 – Today" class="reveal">
<h3>AlpineAI AG</h3>
<h4>Senior Data Scientist</h4>
<p>
AlpineAI is a Swiss company that pioneers the use of large language model (LLM) technology to create
exceptional value for businesses, prioritizing the utmost data security and privacy. I work on the
research and development of SwissGPT and on assistive agents.
</p>
</div>
<div data-date="August 2013 – September 2020" class="reveal">
<h3>Various Employers</h3>
<h4>Hardware and Software Engineer</h4>
<p>
Prior to working as a data scientist, I worked as a hardware and software engineer at various companies.
I have experience in embedded systems development, IoT, and full-stack software engineering.
More details can be found on my LinkedIn profile.
</p>
</div>
</div>
</div>
<div class="proj-modals">
{% include project-details.html modalId="gibModal" modalLabel="gibModalLabel" imagePath="/assets/images/GIB.png"
title="AIMS: AI Infrastructure Manager for Sustainability" description="Dedicated to fostering a culture of
sustainability, I initiated and lead the project AIMS - AI Infrastructure Manager for Sustainability. This
initiative, designed to address the environmental impact of our university's AI operations, embodies a holistic
approach. We're implementing an energy consumption monitoring system, introducing an efficient job queue system
through Slurm, and looking for ways to ingeniously repurpose dissipated heat. These measures aren't merely
elevating standards within the Centre for Artificial Intelligence (CAI); they're sparking a broader
transformation across the entire ZHAW community, as we collectively champion responsible resource utilization
and eco-conscious practices." showLink=false url=""
%}
{% include project-details.html modalId="mlbcaModal" modalLabel="mlbcaModalLabel"
imagePath="/assets/images/ml-bca.jpeg" title="ML-BCA: Machine Learning für Body Composition Analysis"
description="In my second project thesis during my master's program, I laid the foundations for machine
learning-driven body composition analysis at the Cantonal Hospital of Aarau (KSA). This thesis not only led to a
publication but also to a funded research project. The ML-BCA project is bringing the findings of my project thesis
to life as a practical product for KSA. Our primary objective in this endeavor is to develop a robust application
for facilitating medical validation and prospective studies, and to disseminate our collective scientific
contributions to the broader community." showLink=true
url="https://www.zhaw.ch/en/research/research-database/project-detailview/projektid/6501/" %}
{% include project-details.html modalId="mscModal" modalLabel="mscModalLabel"
imagePath="/assets/images/msc_thesis.png" title="Self-Organisation in a Biologically Inspired Learning Framework
Based on Bernoulli Neurons" description="In my Master's thesis, I introduce an innovative learning framework by
combining insights from neuroscience. While deep learning excels in automated image analysis, it faces issues like
noise sensitivity, limited object recognition adaptability, and a constant need for extensive training data. In
contrast, the human brain excels in holistic, non-linear image feature processing through self-organization and
local learning. This thesis pioneers an image-processing approach inspired by the brain's operations. Empirical
results, including Hebbian-trained lateral connections, demonstrate remarkable robustness and the ability to recover
occluded objects." showLink=true url="https://sagerpascal.github.io/lateral-connections/index.html" %}
{% include project-details.html modalId="dogModal" modalLabel="dogModalLabel"
imagePath="/assets/images/unitree_a1.jpg" title="Autonomous Robodog" description="In my role as Head of AI
Demonstrators, my primary mission is to develop compelling showcases of AI's prowess, and a standout amongst these
is our autonomous robodog. This remarkable creation undergoes continuous enhancement through collaboration with our
talented students, who engage in project work and Bachelor's theses. Equipped with a camera and a LIDAR sensor, the
robodog processes this data to navigate its surroundings safely. It's not just a marvel of technology; it's a
responsive companion, capable of recognizing gestures, planning actions, and executing them with autonomy."
showLink=false url="" %}
{% include project-details.html modalId="txtimgModal" modalLabel="txtimgModalLabel"
imagePath="/assets/images/speech_to_img.jpg" title="Speech-2-Image" description="In my role as Head of AI
Demonstrators, my primary responsibility involves developing cutting-edge AI showcases. Among these, we developed a
remarkable Swiss-German speech-to-image generator, a multi-user web application. It effortlessly transcribes
Swiss-German speech to text, refines the text into a polished prompt, and subsequently generates an image that
corresponds to the prompt. Take a glance at the left, and you'll discover the result for the intriguing phrase,
'Älien Pflanze' which translates to 'alien plant'." showLink=true
url="https://cai.cloudlab.zhaw.ch/pages/cai_demos.html#swiss-german-to-image-generative-ai" %}
{% include project-details.html modalId="alpineModal" modalLabel="alpineModalLabel"
imagePath="/assets/images/swiss_gpt.png" title="SwissGPT PoC for AlpineAI" description="AlpineAI pioneers the
use of large language model (LLM) technology to create exceptional value for businesses, prioritizing the utmost
data security and privacy. I spearheaded the research and development of SwissGPT, a cutting-edge LLM designed to
securely access sensitive corporate data, thereby unlocking the full potential of your company's knowledge through
the power of AI." showLink=true url="https://alpineai.ch/products/"
%}
{% include project-details.html modalId="autodidactModal" modalLabel="autodidactModalLabel"
imagePath="/assets/images/ICU_Cockpit.png" title="AUTODIDACT – Automated Video Data Annotation for Clinical Decision
Support" description="Monitoring diverse sensor signals of patients in intensive care can be key to detecting
potentially fatal emergencies. But in order to perform the monitoring automatically, the monitoring system has to
know what is currently happening to the patient: if, for example, the patient is currently being moved by medical
staff, this would explain a sudden peak in heart rate and would thus not be a sign of an emergency. Therefore,
the system is extended with video analysis capabilities and movements of the patient and the medical staff are
detected." showLink=true url="https://www.zhaw.ch/en/research/research-database/project-detailview/projektid/5230/"
%}
{% include project-details.html modalId="vt2Modal" modalLabel="vt2ModalLabel" imagePath="/assets/images/bca.PNG"
title="MSc. Project Thesis 2: End-to-End Pipeline for Body Composition Analysis and Sarcopenia Detection without
Target Labels" description="Body composition analysis can improve patient prognostication and contribute to higher
safety, efficiency and quality of the patient journey. This analysis is often neglected due to the high manual
effort or the lack of labels to develop a system for automatic processing. This project thesis proposes an automated
method for body composition analysis that can be carried out on 3D computed tomography (CT) scans without labels
and with limited computational resources." showLink=true url="https://sagerpascal.github.io/mse-vt2/"
%}
{% include project-details.html modalId="realscoreModal" modalLabel="realscoreModalLabel"
imagePath="/assets/images/realscore.png" title="RealScore - Scanning of Real-World Sheet Music for a Digital Music
Stand" description="In a previous project, a solution to translate printed music scores into machine-readable music
sheets was developed. However, it only works for high-quality input. To scale up the business, it should work as well
for smartphone pictures, used sheets, etc. Project RealScore enhances the successful predecessor project by making
deep learning adapt to unseen data through unsupervised learning." showLink=true
url="https://www.zhaw.ch/en/research/research-database/project-detailview/projektid/3005/"
%}
{% include project-details.html modalId="vt1Modal" modalLabel="vt1ModalLabel"
imagePath="/assets/images/speech-prediction.PNG" title="MSc. Project Thesis 1: Prediction of Subsequent Frames of
Mel-spectrograms" description="This thesis demonstrates the effectiveness of Gated Recurrent Units (GRU) in
predicting speech frames up to a single word's length. Our model, trained on the TIMIT dataset, predicts subsequent
frames of a Mel-spectrogram from a limited set of initial frames. We provide insights into the ideal number of input
frames, frame-level, and phoneme-level prediction accuracy. As a pioneering work in this field, we emphasize the
lessons learned and suggest potential enhancements for future research endeavors." showLink=true
url="https://sagerpascal.github.io/speech-prediction/"
%}
{% include project-details.html modalId="fwaModal" modalLabel="fwaModalLabel"
imagePath="/assets/images/kitro.jpg" title="FWA: Visual Food
Waste Analysis for Sustainable Kitchens" description="FWA is a project funded by Innosuisse that focuses on
automating food waste classification. When food is discarded, an instant photo is captured, enabling us to calculate
the difference from the previous state. We meticulously segment and estimate the weight of the discarded items. This
data empowers kitchens to refine menu planning, thereby reducing food waste and enhancing sustainability."
showLink=true url="https://www.zhaw.ch/en/research/research-database/project-detailview/projektid/3006/"
%}
{% include project-details.html modalId="hackathonModal" modalLabel="hackathonModalLabel"
imagePath="/assets/images/Lunar.PNG" title="ZHAW Hackathon: Deep Reinforcement Learning" description=" I
successfully completed a Reinforcement Learning course at Zurich University of Applied Sciences, culminating in a
challenging 3-day hackathon where I excelled and earned the top grade of 6.0. I've shared the code at
https://github.com/sagerpascal/rl-bootcamp-hackathon, featuring highly efficient algorithms for Lunar Lander,
yielding outstanding results in sample efficiency, training duration, and average reward. Furthermore, I've
meticulously documented the project for your reference." showLink=true
url="https://sagerpascal.github.io/rl-bootcamp-hackathon/"
%}
{% include project-details.html modalId="bscModal" modalLabel="bscModalLabel" imagePath="/assets/images/BA.PNG"
title="Bachelor Thesis" description="In my bachelor thesis, the digitalization of the products of the company Spühl
GmbH is examined from different perspectives. The existing processes are critically scrutinized in order to
minimize project and product risk and to involve the customer more closely in the development process. The
proposed concept for new processes is validated with the development of a new software product. The application
enriches the already captured machine data with data from the production process and consists of a Java
backend (with Spring Boot), a Swagger interface, and a React frontend. However, details cannot be provided due to a
non-disclosure agreement. The bachelor thesis was graded with the highest grade of 6.0." showLink=false url=""
%}
{% include project-details.html modalId="hatespeechModal" modalLabel="hatespeechModalLabel"
imagePath="/assets/images/hate-speech.PNG" title="Hate Speech Detection" description="This was my first deep learning
project in the field of NLP and is therefore featured on my homepage. Following a guide at
https://developers.google.com/machine-learning/guides/text-classification/step-2-5?hl=hi, I trained a sepCNN model
on about 150'000 Twitter posts. In the end, the trained model predicted with an accuracy of 96% whether a given text
is hate speech." showLink=true url="https://github.com/sagerpascal/HateSpeechDetection"
%}
{% include project-details.html modalId="paModal" modalLabel="paModalLabel" imagePath="/assets/images/PA.png"
title="Bsc. Project Thesis (5th semester BSc.)" description="In the project thesis during my Bachelor's studies, I
developed a solution for data acquisition of machines from the company Spühl GmbH together with a fellow student.
The application is based on Azure IoT. Different Docker containers collect various data; this data is then optimized
with Azure Stream Analytics, sent to the cloud, and optionally stored in a database or visualized using Power BI."
showLink=false url="" %}
</div>
<div id="projects">
<h2 class="heading reveal2">Projects</h2>
<div>
{% include project.html modalId="#gibModal" imagePath="/assets/images/GIB.png" title="Sustainable GPU-Cluster"
%}
{% include project.html modalId="#mlbcaModal" imagePath="/assets/images/ml-bca.jpeg" title="ML-BCA" %}
{% include project.html modalId="#mscModal" imagePath="/assets/images/msc_thesis.png" title="MSc. Thesis" %}
{% include project.html modalId="#dogModal" imagePath="/assets/images/unitree_a1.jpg" title="Robodog" %}
{% include project.html modalId="#txtimgModal" imagePath="/assets/images/speech_to_img.jpg"
title="Speech-2-Image" %}
{% include project.html modalId="#alpineModal" imagePath="/assets/images/swiss_gpt.png" title="SwissGPT" %}
{% include project.html modalId="#autodidactModal" imagePath="/assets/images/ICU_Cockpit.png" title="AutoDidact"
%}
{% include project.html modalId="#vt2Modal" imagePath="/assets/images/bca.PNG" title="Sarcopenia Detection" %}
{% include project.html modalId="#realscoreModal" imagePath="/assets/images/realscore.png" title="Real Score" %}
{% include project.html modalId="#vt1Modal" imagePath="/assets/images/speech-prediction.PNG" title="Speech
Prediction" %}
{% include project.html modalId="#fwaModal"
imagePath="/assets/images/kitro.jpg" title="Food Waste
Analysis" %}
{% include project.html modalId="#hackathonModal" imagePath="/assets/images/Lunar.PNG" title="ZHAW Hackathon" %}
{% include project.html modalId="#bscModal" imagePath="/assets/images/BA.PNG" title="Bachelor Thesis" %}
{% include project.html modalId="#hatespeechModal" imagePath="/assets/images/hate-speech.PNG" title="Hate Speech
Detection" %}
{% include project.html modalId="#paModal" imagePath="/assets/images/PA.png" title="Industrial IoT" %}
</div>
</div>
{% include skills.html %}
{% include contact.html %}
{% include footer.html %}
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script type="text/javascript" src="/assets/js/particleground.js"></script>