a little more consistency in reference anchors
nconrad committed Aug 1, 2024
1 parent d569fd1 commit e92ca02
Showing 5 changed files with 15 additions and 15 deletions.
6 changes: 3 additions & 3 deletions src/pages/science/lightning-detector.md
@@ -7,7 +7,7 @@ Additionally, the data collected by SDR is collected at about 10 MB/s making it

## How Lightning is Created

According to the article that NWS provides[1], the conditions needed to produce lightning have been known for some time. However, exactly how lightning forms has never been verified, so there is room for debate. Leading theories focus on the separation of electric charge and the generation of an electric field within a thunderstorm. Recent studies also indicate that ice, hail, and semi-frozen water drops known as graupel are essential to lightning development. Storms that fail to produce large quantities of ice usually fail to produce lightning.
According to the article that NWS provides<sup>[[1](#references)]</sup>, the conditions needed to produce lightning have been known for some time. However, exactly how lightning forms has never been verified, so there is room for debate. Leading theories focus on the separation of electric charge and the generation of an electric field within a thunderstorm. Recent studies also indicate that ice, hail, and semi-frozen water drops known as graupel are essential to lightning development. Storms that fail to produce large quantities of ice usually fail to produce lightning.

### Charge Separation

@@ -37,6 +37,6 @@ As a preliminary result, we were able to receive a distinguishable signal.

We expect to collect sufficient positive and negative lightning data so that we can build a process to distinguish them. Additionally, when Waggle/Sage nodes including this SDR are deployed and form a grid, we will implement a way to triangulate the location of a lightning strike.
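
As a rough illustration of how such triangulation could work, the sketch below solves for a strike location from time-differences-of-arrival (TDoA) at several nodes. This is not the deployed implementation; the node coordinates, the timestamps, and the use of SciPy are all assumptions.

```python
# Sketch: triangulating a lightning strike from time-differences-of-arrival
# (TDoA) at several SDR-equipped nodes. Node positions, timestamps, and the
# use of SciPy are illustrative assumptions, not the deployed method.
import numpy as np
from scipy.optimize import least_squares

C = 3.0e8  # propagation speed of the RF pulse (m/s)

# (x, y) positions of four hypothetical nodes, in meters
nodes = np.array([[0.0, 0.0],
                  [10_000.0, 0.0],
                  [0.0, 10_000.0],
                  [10_000.0, 10_000.0]])

# hypothetical arrival times of one strike's pulse at each node, in seconds
t = np.array([3.40e-5, 2.60e-5, 4.10e-5, 3.55e-5])

def residuals(p):
    """Measured minus predicted arrival-time deltas relative to node 0."""
    d = np.linalg.norm(nodes - p, axis=1)  # candidate-to-node distances
    return (t - t[0]) - (d - d[0]) / C

sol = least_squares(residuals, x0=np.array([5_000.0, 5_000.0]))
print("estimated strike location (m):", sol.x)
```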

## Citations
## References

[1] https://www.weather.gov/source/zhu/ZHU_Training_Page/lightning_stuff/lightning2/lightning_intro.html
1. https://www.weather.gov/source/zhu/ZHU_Training_Page/lightning_stuff/lightning2/lightning_intro.html
4 changes: 2 additions & 2 deletions src/pages/science/scalable-ci-in-aps.md
@@ -2,13 +2,13 @@

Can Edge Computing Be Used in X-Ray Beamline Experiments to Process a High-Volume and Fast Data Stream and Help Scientists Make Real-Time Decisions for Experiments?

Edge computing offers computation close to sensors for real-time data processing. Various X-ray sensor beamlines at Argonne's Advanced Photon Source (APS) stream enormous volumes of data at fast frame rates to the cloud for analysis, and scientific discovery happens in a high performance computing (HPC) facility. The scientists and engineers on the beamlines have recently added a data streaming and processing feature to their PvaPy<sup>[[1]](#references)</sup> software package. This feature lets users process data in real time, allowing them to see experiment results immediately as data is streamed to HPC and to make decisions as experiments take place. Edge computing can provide the same real-time data processing capability by deploying computation next to the sensors and can host AI@Edge applications that process data directly from the sensor. The Sage team asked whether edge computing could scale its AI computation to meet the same real-time data processing requirement while reducing latency by not moving data to the cloud.
Edge computing offers computation close to sensors for real-time data processing. Various X-ray sensor beamlines at Argonne's Advanced Photon Source (APS) stream enormous volumes of data at fast frame rates to the cloud for analysis, and scientific discovery happens in a high performance computing (HPC) facility. The scientists and engineers on the beamlines have recently added a data streaming and processing feature to their PvaPy<sup>[[1](#references)]</sup> software package. This feature lets users process data in real time, allowing them to see experiment results immediately as data is streamed to HPC and to make decisions as experiments take place. Edge computing can provide the same real-time data processing capability by deploying computation next to the sensors and can host AI@Edge applications that process data directly from the sensor. The Sage team asked whether edge computing could scale its AI computation to meet the same real-time data processing requirement while reducing latency by not moving data to the cloud.

## Configuring Edge Computing in the Beamline
![Dataflow](imgs/scalable-ci-in-aps-1.jpg)
> Figure 1: Dataflow of the pipeline: from X-ray detector to a visualization computer. Note that the control program in the diagram was designed for automatic scaling but was not implemented in this work.
To understand how edge computing performs in this domain, the Sage team established a workflow pipeline connecting an X-ray detector with a visualization computer placed at the end of the pipeline. We used multiple 1U rack servers, each equipped with an Nvidia T4 GPU accelerator, as edge computing nodes and configured them in the middle of the pipeline to provide AI computation. The edge nodes were connected to the detector via a high-speed 10 Gbps network; however, each node's interface supported up to 1 Gbps. The nodes hosted computing resources for running the scientist-developed machine learning (ML) model<sup>[[2]](#references)</sup> after we quantized it to make inference faster, though the process sacrifices up to ~10% of accuracy. This allowed the nodes to run more instances of the model using the same computing resources. The X-ray detector was configured to stream 0.5-megapixel frames at a frame rate of 1-2 kHz. To understand the computation and network loads required in the workflow pipeline, we varied the number of edge nodes and the number of instances of the AI@Edge application.
To understand how edge computing performs in this domain, the Sage team established a workflow pipeline connecting an X-ray detector with a visualization computer placed at the end of the pipeline. We used multiple 1U rack servers, each equipped with an Nvidia T4 GPU accelerator, as edge computing nodes and configured them in the middle of the pipeline to provide AI computation. The edge nodes were connected to the detector via a high-speed 10 Gbps network; however, each node's interface supported up to 1 Gbps. The nodes hosted computing resources for running the scientist-developed machine learning (ML) model<sup>[[2](#references)]</sup> after we quantized it to make inference faster, though the process sacrifices up to ~10% of accuracy. This allowed the nodes to run more instances of the model using the same computing resources. The X-ray detector was configured to stream 0.5-megapixel frames at a frame rate of 1-2 kHz. To understand the computation and network loads required in the workflow pipeline, we varied the number of edge nodes and the number of instances of the AI@Edge application.
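
The write-up does not specify the quantization scheme, so the sketch below shows one common approach: half-precision (FP16) casting of a PyTorch model. The resnet18 backbone and input shape are stand-ins for the scientists' actual model, which is not shown here.

```python
# Sketch: one common way to quantize a model for faster inference on a GPU
# such as the T4: casting weights and inputs to FP16. resnet18 is a
# stand-in for the scientist-developed model; the quantization scheme
# actually used (e.g., INT8) may differ.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval().cuda().half()

# hypothetical detector frame resized to the model's input shape
frame = torch.rand(1, 3, 512, 512).cuda().half()

with torch.no_grad():
    out = model(frame)  # FP16 inference: less memory per instance, so more
                        # model instances fit on one GPU
```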

## Performance of the Pipeline
![tracking traffic](imgs/scalable-ci-in-aps-4.gif)
10 changes: 5 additions & 5 deletions src/pages/science/smoke-detection.md
@@ -2,7 +2,7 @@

![](imgs/smoke_wildfire.jpg)

Forest fires are a major problem and have detrimental effects on the environment. Current solutions for detecting forest fires are not efficient enough, and existing machine learning models have computation times that are far too long and accuracies that are too poor. This study is a continuation of the work done by UCSD and their SmokeyNet deep learning architecture for smoke detection[1]. We compared the performance of several deep learning models to find the best model for this problem and to determine whether a simple model can rival a complex one. The models are VGG16, UCSD SmokeyNet, ResNet18, ResNet34, and ResNet50.
Forest fires are a major problem and have detrimental effects on the environment. Current solutions for detecting forest fires are not efficient enough, and existing machine learning models have computation times that are far too long and accuracies that are too poor. This study is a continuation of the work done by UCSD and their SmokeyNet deep learning architecture for smoke detection<sup>[[1](#references)]</sup>. We compared the performance of several deep learning models to find the best model for this problem and to determine whether a simple model can rival a complex one. The models are VGG16, UCSD SmokeyNet, ResNet18, ResNet34, and ResNet50.
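
A sketch of how such a head-to-head comparison might be set up is below. The torchvision weights, input size, and batch size are assumptions, and SmokeyNet is omitted because it is not distributed with torchvision.

```python
# Sketch: timing several candidate backbones on the same batch. For the
# accuracy comparison, each model's classification head would be replaced
# with a two-class smoke / no-smoke layer and fine-tuned on the dataset.
import time
import torch
import torchvision.models as models

backbones = {
    "VGG16":    models.vgg16(weights="IMAGENET1K_V1"),
    "ResNet18": models.resnet18(weights="IMAGENET1K_V1"),
    "ResNet34": models.resnet34(weights="IMAGENET1K_V1"),
    "ResNet50": models.resnet50(weights="IMAGENET1K_V1"),
}

batch = torch.rand(8, 3, 224, 224)  # hypothetical batch of camera frames

for name, net in backbones.items():
    net.eval()
    start = time.perf_counter()
    with torch.no_grad():
        net(batch)
    print(f"{name}: {time.perf_counter() - start:.3f} s per batch of 8")
```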

## The Data

@@ -36,9 +36,9 @@ The classifier was able to detect smoke patches accurately from images collected

For this work, we created a large dataset and opened it to the public for any related future research, such as building better models. The dataset still needs to be explored with more ways of augmenting the images, such as scaling the contrast levels, as this would be a good way to separate smoke from clouds and other phenomena. Through these experiments, we found that a simple model can be acceptably accurate and can rival a complex model. We hope that this research can greatly help the fight against forest fires by making it possible to attend to them before they get out of control.
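
For example, contrast-oriented augmentations along these lines might look like the following torchvision sketch; the specific transforms and parameter ranges are illustrative, not the ones used in this study.

```python
# Sketch: contrast-oriented augmentations that could help separate smoke
# from cloud. The transforms and parameter ranges are illustrative.
from torchvision import transforms

augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.2, contrast=0.4),  # scale contrast levels
    transforms.RandomAutocontrast(p=0.5),  # stretch the intensity range
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# augmented = augment(pil_image)  # applied to each PIL image during training
```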

## Citations
[1] Dewangan, A., Pande, Y., Braun, H.W., Vernon, F., Perez, I., Altintas, I., Cottrell, G.W. and Nguyen, M.H., 2022. FIgLib & SmokeyNet: Dataset and deep learning model for real-time wildland fire smoke detection. Remote Sensing, 14(4), p.1007.
## References
1. Dewangan, A., Pande, Y., Braun, H.W., Vernon, F., Perez, I., Altintas, I., Cottrell, G.W. and Nguyen, M.H., 2022. FIgLib & SmokeyNet: Dataset and deep learning model for real-time wildland fire smoke detection. Remote Sensing, 14(4), p.1007.

[2] https://hpwren.ucsd.edu
2. https://hpwren.ucsd.edu

[3] http://vintage.winklerbros.net/swimseg.html
3. http://vintage.winklerbros.net/swimseg.html
8 changes: 4 additions & 4 deletions src/pages/science/snow-detection.md
@@ -15,7 +15,7 @@ First, the images needed to be preprocessed and transformed. One problem snow de

Our goal was to create a machine learning model that could detect whether there was snow on the ground around the river. Convolutional neural networks are the main tool of choice for these kinds of image-related tasks. They work by sliding a "window" across an image to capture relationships and patterns between pixels. This sliding-window approach reduces the number of parameters and the complexity of the model. There is already a multitude of pre-trained convolutional network models that perform well on image classification tasks, but there aren't any deep learning models trained specifically for snow detection. _Transfer learning_ comes to the rescue, making it possible to train a new model with limited time and computational power.

Transfer learning works by taking an image classification model that someone else has already taken the time to train and reusing it for a new purpose. We utilized ResNet50[1], a popular convolutional neural network model that pioneered a technique called residual connections. Residual connections allow neural networks to optimize quickly while still being deep enough to capture complex relationships. ResNet50 is a very deep network with fifty layers (hence the name) and would take a lot of time and computing power to train even with the residual connections, but some free pre-trained models are essentially plug-and-play with only small modifications. A visualization of ResNet50's architecture is shown below[2].
Transfer learning works by taking an image classification model that someone else has already taken the time to train and reusing it for a new purpose. We utilized ResNet50<sup>[[1](#references)]</sup>, a popular convolutional neural network model that pioneered a technique called residual connections. Residual connections allow neural networks to optimize quickly while still being deep enough to capture complex relationships. ResNet50 is a very deep network with fifty layers (hence the name) and would take a lot of time and computing power to train even with the residual connections, but some free pre-trained models are essentially plug-and-play with only small modifications. A visualization of ResNet50's architecture is shown below<sup>[[2](#references)]</sup>.

![ResNet50 Model (without additional layers)](imgs/snow_ResNet50.png)
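
A minimal sketch of this plug-and-play pattern with torchvision is below. The frozen backbone and the two-class snow / no-snow head are assumptions about the setup, not necessarily the authors' exact configuration.

```python
# Sketch: the plug-and-play transfer-learning pattern described above. The
# frozen backbone and two-class head are assumptions.
import torch.nn as nn
import torchvision.models as models

model = models.resnet50(weights="IMAGENET1K_V2")  # pretrained on ImageNet

for param in model.parameters():  # freeze the pretrained backbone
    param.requires_grad = False

# replace the final fully connected layer; only this new head is trained
# on the Bad River images
model.fc = nn.Linear(model.fc.in_features, 2)
```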

@@ -30,7 +30,7 @@ The classifier was able to detect snow incredibly accurately from images collect

We weren't able to get additional data from the Bad River, but future work could look at using these images to predict turbidity and other information about the river. This could help predict wild rice yields as well. More data from other Waggle/Sage nodes could also be used to create a more general snow classifier that could be applied at other locations with more confidence, but for now it performs best only at the Bad River site.

## Citations
[1] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. doi:10.1109/cvpr.2016.90
## References
1. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. doi:10.1109/cvpr.2016.90

[2] https://commons.wikimedia.org/wiki/File:ResNet50.png
2. https://commons.wikimedia.org/wiki/File:ResNet50.png
2 changes: 1 addition & 1 deletion src/pages/science/wildfire-science.md
@@ -63,7 +63,7 @@ Hopefully, as we improve the technology to detect these fires early on, we can s


![The Amazon Rainforest](./imgs/wildfire-1.jpg)
> The Amazon Rainforest, home to many peoples and countless species. A home worth protecting.11
> The Amazon Rainforest, home to many peoples and countless species. A home worth protecting.<sup>[[11](#references)]</sup>
## References
1. “How Wildfires Work”. https://science.howstuffworks.com/nature/natural-disasters/wildfire.htm
