From 70c4fcb391683647597f6de48fc5681190076607 Mon Sep 17 00:00:00 2001
From: dormant-user
Date: Sun, 29 Sep 2024 13:18:42 -0500
Subject: [PATCH] Remove redundancy in pieChart instances - JavaScript

Remove redundancy in dockerStats - JavaScript
Auto calculate current process metrics by default
Reuse disk usage metrics to avoid redundant calls
Update runbook and README.md
---
 README.md                           |  11 ++-
 docs/README.html                    |  11 ++-
 docs/README.md                      |  11 ++-
 docs/_sources/README.md.txt         |  11 ++-
 docs/genindex.html                  |   2 +
 docs/index.html                     |  17 ++++
 docs/objects.inv                    | Bin 1689 -> 1695 bytes
 docs/searchindex.js                 |   2 +-
 pyninja/main.py                     |   2 +
 pyninja/monitor/resources.py        |  25 ++++-
 pyninja/monitor/routes.py           |   4 +
 pyninja/monitor/templates/main.html | 144 ++++++++--------------------
 pyninja/operations.py               |  35 ++++---
 pyninja/version.py                  |   2 +-
 release_notes.rst                   |  51 +++++++---
 15 files changed, 181 insertions(+), 147 deletions(-)

diff --git a/README.md b/README.md
index cdf7e42..b5cf0eb 100644
--- a/README.md
+++ b/README.md
@@ -53,19 +53,24 @@ pyninja start
 
 > _By default, `PyNinja` will look for a `.env` file in the current working directory._
 
+- **APIKEY** - API Key for authentication.
 - **NINJA_HOST** - Hostname for the API server.
 - **NINJA_PORT** - Port number for the API server.
-- **WORKERS** - Number of workers for the uvicorn server.
 - **REMOTE_EXECUTION** - Boolean flag to enable remote execution.
 - **API_SECRET** - Secret access key for running commands on server remotely.
 - **MONITOR_USERNAME** - Username to authenticate the monitoring page.
 - **MONITOR_PASSWORD** - Password to authenticate the monitoring page.
 - **MONITOR_SESSION** - Session timeout for the monitoring page.
 - **MAX_CONNECTIONS** - Maximum number of monitoring sessions allowed in parallel.
-- **SERVICE_MANAGER** - Service manager filepath to handle the service status requests.
+- **PROCESSES** - List of process names to include in the monitor page.
+- **SERVICES** - List of service names to include in the monitor page.
+- **GPU_LIB** - GPU library filepath to use for monitoring.
+- **DISK_LIB** - Disk library filepath to use for monitoring.
+- **SERVICE_LIB** - Service library filepath to use for monitoring.
+- **PROCESSOR_LIB** - Processor library filepath to use for monitoring.
 - **DATABASE** - FilePath to store the auth database that handles the authentication errors.
 - **RATE_LIMIT** - List of dictionaries with `max_requests` and `seconds` to apply as rate limit.
-- **APIKEY** - API Key for authentication.
+- **LOG_CONFIG** - Logging configuration file path.
 
 ⚠️ Enabling remote execution can be extremely risky and a major security threat. So use **caution** and set the **API_SECRET** to a strong value.
 
diff --git a/docs/README.html b/docs/README.html
index fc0d025..c77ad6e 100644
--- a/docs/README.html
+++ b/docs/README.html
@@ -90,19 +90,24 @@

Environment Variables

By default, PyNinja will look for a .env file in the current working directory.

+  • APIKEY - API Key for authentication.
   • NINJA_HOST - Hostname for the API server.
   • NINJA_PORT - Port number for the API server.
-  • WORKERS - Number of workers for the uvicorn server.
   • REMOTE_EXECUTION - Boolean flag to enable remote execution.
   • API_SECRET - Secret access key for running commands on server remotely.
   • MONITOR_USERNAME - Username to authenticate the monitoring page.
   • MONITOR_PASSWORD - Password to authenticate the monitoring page.
   • MONITOR_SESSION - Session timeout for the monitoring page.
   • MAX_CONNECTIONS - Maximum number of monitoring sessions allowed in parallel.
-  • SERVICE_MANAGER - Service manager filepath to handle the service status requests.
+  • PROCESSES - List of process names to include in the monitor page.
+  • SERVICES - List of service names to include in the monitor page.
+  • GPU_LIB - GPU library filepath to use for monitoring.
+  • DISK_LIB - Disk library filepath to use for monitoring.
+  • SERVICE_LIB - Service library filepath to use for monitoring.
+  • PROCESSOR_LIB - Processor library filepath to use for monitoring.
   • DATABASE - FilePath to store the auth database that handles the authentication errors.
   • RATE_LIMIT - List of dictionaries with max_requests and seconds to apply as rate limit.
-  • APIKEY - API Key for authentication.
+  • LOG_CONFIG - Logging configuration file path.

⚠️ Enabling remote execution can be extremely risky and a major security threat. So use caution and set the API_SECRET to a strong value.
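For reference, a minimal `.env` sketch wiring these variables together — every value below is an illustrative placeholder rather than a project default, and the JSON-style syntax for list values is an assumption based on common pydantic-settings conventions:

```
# Illustrative placeholders only — adjust to your environment
APIKEY=replace-with-a-strong-api-key
NINJA_HOST=0.0.0.0
NINJA_PORT=8000
MONITOR_USERNAME=admin
MONITOR_PASSWORD=change-me
MONITOR_SESSION=3600
MAX_CONNECTIONS=3
PROCESSES='["nginx", "gunicorn"]'
SERVICES='["docker", "ssh"]'
RATE_LIMIT='[{"max_requests": 5, "seconds": 1}]'
LOG_CONFIG=logging.ini
# REMOTE_EXECUTION=true   # risky — requires a strong API_SECRET (see warning above)
```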

diff --git a/docs/README.md b/docs/README.md
index cdf7e42..b5cf0eb 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -53,19 +53,24 @@ pyninja start
 
 > _By default, `PyNinja` will look for a `.env` file in the current working directory._
 
+- **APIKEY** - API Key for authentication.
 - **NINJA_HOST** - Hostname for the API server.
 - **NINJA_PORT** - Port number for the API server.
-- **WORKERS** - Number of workers for the uvicorn server.
 - **REMOTE_EXECUTION** - Boolean flag to enable remote execution.
 - **API_SECRET** - Secret access key for running commands on server remotely.
 - **MONITOR_USERNAME** - Username to authenticate the monitoring page.
 - **MONITOR_PASSWORD** - Password to authenticate the monitoring page.
 - **MONITOR_SESSION** - Session timeout for the monitoring page.
 - **MAX_CONNECTIONS** - Maximum number of monitoring sessions allowed in parallel.
-- **SERVICE_MANAGER** - Service manager filepath to handle the service status requests.
+- **PROCESSES** - List of process names to include in the monitor page.
+- **SERVICES** - List of service names to include in the monitor page.
+- **GPU_LIB** - GPU library filepath to use for monitoring.
+- **DISK_LIB** - Disk library filepath to use for monitoring.
+- **SERVICE_LIB** - Service library filepath to use for monitoring.
+- **PROCESSOR_LIB** - Processor library filepath to use for monitoring.
 - **DATABASE** - FilePath to store the auth database that handles the authentication errors.
 - **RATE_LIMIT** - List of dictionaries with `max_requests` and `seconds` to apply as rate limit.
-- **APIKEY** - API Key for authentication.
+- **LOG_CONFIG** - Logging configuration file path.
 
 ⚠️ Enabling remote execution can be extremely risky and a major security threat. So use **caution** and set the **API_SECRET** to a strong value.
 
diff --git a/docs/_sources/README.md.txt b/docs/_sources/README.md.txt
index cdf7e42..b5cf0eb 100644
--- a/docs/_sources/README.md.txt
+++ b/docs/_sources/README.md.txt
@@ -53,19 +53,24 @@ pyninja start
 
 > _By default, `PyNinja` will look for a `.env` file in the current working directory._
 
+- **APIKEY** - API Key for authentication.
 - **NINJA_HOST** - Hostname for the API server.
 - **NINJA_PORT** - Port number for the API server.
-- **WORKERS** - Number of workers for the uvicorn server.
 - **REMOTE_EXECUTION** - Boolean flag to enable remote execution.
 - **API_SECRET** - Secret access key for running commands on server remotely.
 - **MONITOR_USERNAME** - Username to authenticate the monitoring page.
 - **MONITOR_PASSWORD** - Password to authenticate the monitoring page.
 - **MONITOR_SESSION** - Session timeout for the monitoring page.
 - **MAX_CONNECTIONS** - Maximum number of monitoring sessions allowed in parallel.
-- **SERVICE_MANAGER** - Service manager filepath to handle the service status requests.
+- **PROCESSES** - List of process names to include in the monitor page.
+- **SERVICES** - List of service names to include in the monitor page.
+- **GPU_LIB** - GPU library filepath to use for monitoring.
+- **DISK_LIB** - Disk library filepath to use for monitoring.
+- **SERVICE_LIB** - Service library filepath to use for monitoring.
+- **PROCESSOR_LIB** - Processor library filepath to use for monitoring.
 - **DATABASE** - FilePath to store the auth database that handles the authentication errors.
 - **RATE_LIMIT** - List of dictionaries with `max_requests` and `seconds` to apply as rate limit.
-- **APIKEY** - API Key for authentication.
+- **LOG_CONFIG** - Logging configuration file path.
 
 ⚠️ Enabling remote execution can be extremely risky and a major security threat. So use **caution** and set the **API_SECRET** to a strong value.
 
diff --git a/docs/genindex.html b/docs/genindex.html
index 475c246..6b948eb 100644
--- a/docs/genindex.html
+++ b/docs/genindex.html
@@ -376,6 +376,8 @@

L

M

+  • map_docker_stats() (in module pyninja.monitor.resources)
   • max_connections (pyninja.models.EnvConfig attribute)
   • max_requests (pyninja.models.RateLimit attribute)

diff --git a/docs/index.html b/docs/index.html
index 887311a..1e6e226 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -1884,6 +1884,23 @@

    Authenticator

    Resources

+pyninja.monitor.resources.map_docker_stats(json_data: Dict[str, str]) → Dict[str, str]
+
+    Map the JSON data to a dictionary.
+
+    Parameters:
+        json_data – JSON data from the docker stats command.
+
+    Returns:
+        Returns a dictionary with container stats.
+
+    Return type:
+        Dict[str, str]
+
 pyninja.monitor.resources.get_cpu_percent(cpu_interval: int) → List[float]
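To make the new mapping concrete, here is a small self-contained sketch of what the documented function does with one line of `docker stats --no-stream --format "{{json .}}"` output; the container values are invented, and the helper simply mirrors the mapping added in `pyninja/monitor/resources.py` below:

```python
import json


def map_docker_stats(json_data: dict) -> dict:
    """Rename the raw docker stats keys into the labels shown on the monitor page."""
    return {
        "Container ID": json_data.get("ID"),
        "Container Name": json_data.get("Name"),
        "CPU": json_data.get("CPUPerc"),
        "Memory": json_data.get("MemPerc"),
        "Memory Usage": json_data.get("MemUsage"),
        "Block I/O": json_data.get("BlockIO"),
        "Network I/O": json_data.get("NetIO"),
    }


# A single line of `docker stats --no-stream --format "{{json .}}"` output (values invented)
line = (
    '{"ID": "a1b2c3d4e5f6", "Name": "pyninja", "CPUPerc": "0.15%", "MemPerc": "1.20%", '
    '"MemUsage": "24.5MiB / 2GiB", "NetIO": "1.2kB / 648B", "BlockIO": "0B / 0B", "PIDs": "4"}'
)
print(map_docker_stats(json.loads(line)))
# {'Container ID': 'a1b2c3d4e5f6', 'Container Name': 'pyninja', 'CPU': '0.15%', ...}
```

Note that keys not listed in the mapping (such as `PIDs`) are simply dropped, which matches the filtering behavior the old dictionary comprehension provided.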
diff --git a/docs/objects.inv b/docs/objects.inv
index 0922060231b94185253739ad865caf2b74842f3d..50d67fc1386576fb6de91f960424adbfbecf5d6c 100644
GIT binary patch
delta 1582
[base85-encoded binary delta payload]

delta 1576
[base85-encoded binary delta payload]

diff --git a/pyninja/main.py b/pyninja/main.py
--- a/pyninja/main.py
+++ b/pyninja/main.py
         BASE_LOGGER.warning("Remote execution disabled")
     # Conditional endpoint based on monitor_username and monitor_password
     if all((models.env.monitor_username, models.env.monitor_password)):
+        models.env.processes.append(str(os.getpid()))
         PyNinjaAPI.routes.extend(get_all_monitor_routes(dependencies))
         PyNinjaAPI.add_exception_handler(
             exc_class_or_status_code=exceptions.RedirectException,
diff --git a/pyninja/monitor/resources.py b/pyninja/monitor/resources.py
index b1fc772..c360f92 100644
--- a/pyninja/monitor/resources.py
+++ b/pyninja/monitor/resources.py
@@ -2,7 +2,6 @@
 import json
 import logging
 import os
-import shutil
 from concurrent.futures import ThreadPoolExecutor
 from typing import Dict, List
 
@@ -15,6 +14,27 @@
 
 LOGGER = logging.getLogger("uvicorn.default")
 
 
+def map_docker_stats(json_data: Dict[str, str]) -> Dict[str, str]:
+    """Map the JSON data to a dictionary.
+
+    Args:
+        json_data: JSON data from the docker stats command.
+
+    Returns:
+        Dict[str, str]:
+        Returns a dictionary with container stats.
+ """ + return { + "Container ID": json_data.get("ID"), + "Container Name": json_data.get("Name"), + "CPU": json_data.get("CPUPerc"), + "Memory": json_data.get("MemPerc"), + "Memory Usage": json_data.get("MemUsage"), + "Block I/O": json_data.get("BlockIO"), + "Network I/O": json_data.get("NetIO"), + } + + def get_cpu_percent(cpu_interval: int) -> List[float]: """Get CPU usage percentage. @@ -45,7 +65,7 @@ async def get_docker_stats() -> List[Dict[str, str]]: LOGGER.debug(stderr.decode().strip()) return [] return [ - {key: value for key, value in json.loads(line).items() if key != "PIDs"} + map_docker_stats(json.loads(line)) for line in stdout.decode().strip().splitlines() ] @@ -62,7 +82,6 @@ async def get_system_metrics() -> Dict[str, dict]: return dict( memory_info=psutil.virtual_memory()._asdict(), swap_info=psutil.swap_memory()._asdict(), - disk_info=shutil.disk_usage("/")._asdict(), load_averages=dict(m1=m1, m5=m5, m15=m15), ) diff --git a/pyninja/monitor/routes.py b/pyninja/monitor/routes.py index 60c0279..785409e 100644 --- a/pyninja/monitor/routes.py +++ b/pyninja/monitor/routes.py @@ -195,7 +195,10 @@ async def websocket_endpoint(websocket: WebSocket, session_token: str = Cookie(N session_timestamp = models.ws_session.client_auth.get(websocket.client.host).get( "timestamp" ) + # Base task with a placeholder asyncio sleep to start the task loop task = asyncio.create_task(asyncio.sleep(0.1)) + # Store disk usage information (during startup) to avoid repeated calls + disk_info = shutil.disk_usage("/")._asdict() while True: # Validate session asynchronously (non-blocking) # This way of handling session validation is more efficient than using a blocking call @@ -225,6 +228,7 @@ async def websocket_endpoint(websocket: WebSocket, session_token: str = Cookie(N await websocket.close() break data = await resources.system_resources() + data["disk_info"] = disk_info try: await websocket.send_json(data) except WebSocketDisconnect: diff --git a/pyninja/monitor/templates/main.html b/pyninja/monitor/templates/main.html index 7e53064..bf05769 100644 --- a/pyninja/monitor/templates/main.html +++ b/pyninja/monitor/templates/main.html @@ -342,15 +342,6 @@

    Docker Stats

-        [removed: static Docker Stats table header markup — columns Container ID,
-         Container Name, CPU %, Memory Usage, Memory %, Net I/O, Block I/O]
 
@@ -407,7 +398,7 @@

    Process Stats

         }
 
         // Function to create the table head
-        function createHead(tableId, tableCSS) {
+        function handleTable(dataJSON, tableId, tableCSS) {
             // Show the service and the table
             const statsService = document.getElementById(tableCSS);
             statsService.style.display = "flex";
@@ -416,13 +407,14 @@
             if (tableHead.children.length === 0) {
                 const col = document.createElement('tr');
                 // Loop through the JSON data and create the table head
-                for (const key in serviceStatsJSON[0]) {
+                for (const key in dataJSON[0]) {
                     const th = document.createElement('th');
                     th.innerText = key;
                     col.appendChild(th);
                 }
                 tableHead.appendChild(col);
             }
+            populateTable(dataJSON, tableId);
         }
 
         // Function to populate data into the table
@@ -460,29 +452,7 @@
             const dockerStatsJSON = data.docker_stats;
             // Check if dockerStatsJSON is valid
             if (dockerStatsJSON && dockerStatsJSON.length > 0) {
-                // Show the container and the table
-                const statsContainer = document.getElementById("docker-stats");
-                statsContainer.style.display = "flex";
-                const table = document.getElementById("dockerStatsTable");
-                table.style.display = "table";
-                // Get reference to the table body
-                const tableBody = document.querySelector('#dockerStatsTable tbody');
-                // Clear the existing table rows
-                tableBody.innerHTML = '';
-                // Loop through the JSON data and populate the table
-                dockerStatsJSON.forEach(container => {
-                    const row = document.createElement('tr');
-                    row.innerHTML = `
-                        [removed: table cell markup for ${container.ID}, ${container.Name}, ${container.CPUPerc},
-                         ${container.MemUsage}, ${container.MemPerc}, ${container.NetIO}, ${container.BlockIO}]
-                    `;
-                    tableBody.appendChild(row);
-                });
+                handleTable(dockerStatsJSON, "dockerStatsTable", "docker-stats");
             } else {
                 // Hide the container if no data is available
                 document.getElementById("docker-stats").style.display = "none";
@@ -491,8 +461,7 @@
             const serviceStatsJSON = data.service_stats;
             // Check if serviceStatsJSON is valid
             if (serviceStatsJSON && serviceStatsJSON.length > 0) {
-                createHead("serviceStatsTable", "service-stats");
-                populateTable(serviceStatsJSON, "serviceStatsTable");
+                handleTable(serviceStatsJSON, "serviceStatsTable", "service-stats");
             } else {
                 // Hide the container if no data is available
                 document.getElementById("service-stats").style.display = "none";
@@ -501,8 +470,7 @@
             const processStatsJSON = data.process_stats;
             // Check if processStatsJSON is valid
             if (processStatsJSON && processStatsJSON.length > 0) {
-                createHead('processStatsTable', "process-stats");
-                populateTable(processStatsJSON, 'processStatsTable');
+                handleTable(processStatsJSON, "processStatsTable", "process-stats");
             } else {
                 // Hide the container if no data is available
                 document.getElementById("process-stats").style.display = "none";
@@ -601,21 +569,16 @@
             });
         }
 
-        // Memory Chart
-        document.getElementById("memoryTotal").innerText = `Total: ${formatBytes(memoryInfo.total)}`;
-        if (memoryChartInstance) {
-            memoryChartInstance.data.datasets[0].data = [memoryInfo.used, memoryInfo.total - memoryInfo.used];
-            memoryChartInstance.update();
-        } else {
-            const memoryChart = document.getElementById('memoryChart').getContext('2d');
-            memoryChartInstance = new Chart(memoryChart, {
+        // Function to create a pieChart instance for Memory, Swap and Disk utilization
+        function createChartInstance(pieChart, chartLabel, colors, chartData) {
+            return new Chart(pieChart, {
                 type: 'pie',
                 data: {
                     labels: ['Used', 'Free'],
                     datasets: [{
-                        label: 'Memory Usage',
-                        data: [memoryInfo.used, memoryInfo.total - memoryInfo.used],
-                        backgroundColor: ['#FF6384', '#36A2EB']
+                        label: chartLabel,
+                        data: chartData,
+                        backgroundColor: colors
                     }]
                 },
                 options: {
@@ -635,6 +598,21 @@
             });
         }
 
+        // Memory Chart
+        document.getElementById("memoryTotal").innerText = `Total: ${formatBytes(memoryInfo.total)}`;
+        if (memoryChartInstance) {
+            memoryChartInstance.data.datasets[0].data = [memoryInfo.used, memoryInfo.total - memoryInfo.used];
+            memoryChartInstance.update();
+        } else {
+            const memoryChart = document.getElementById('memoryChart').getContext('2d');
+            memoryChartInstance = createChartInstance(
+                memoryChart,
+                'Memory Usage',
+                ['#36A2EB', '#FFCE56'],
+                [memoryInfo.used, memoryInfo.total - memoryInfo.used]
+            );
+        }
+
         // Swap Chart
         const swapChart = document.getElementById('swapChart');
         if (swapChart) {
@@ -643,35 +621,14 @@
             if (swapChartInstance) {
                 swapChartInstance.data.datasets[0].data = [swapInfo.used, swapInfo.total - swapInfo.used];
                 swapChartInstance.update();
-            } else {
-                if (swapChart) {
-                    const swapContext = swapChart.getContext('2d')
-                    swapChartInstance = new Chart(swapContext, {
-                        type: 'pie',
-                        data: {
-                            labels: ['Used', 'Free'],
-                            datasets: [{
-                                label: 'Swap Usage',
-                                data: [swapInfo.used, swapInfo.total - swapInfo.used],
-                                backgroundColor: ['#FFCE56', '#E7E9ED']
-                            }]
-                        },
-                        options: {
-                            responsive: true,
-                            plugins: {
-                                tooltip: {
-                                    callbacks: {
-                                        label: function (tooltipItem) {
-                                            const value = tooltipItem.raw;
-                                            const formattedValue = formatBytes(value);
-                                            return `${tooltipItem.label}: ${formattedValue}`;
-                                        }
-                                    }
-                                }
-                            }
-                        }
-                    });
-                }
+            } else if (swapChart) {
+                // swapChart is an optional chart, so create context only when available
+                swapChartInstance = createChartInstance(
+                    swapChart.getContext('2d'),
+                    'Swap Usage',
+                    ['#FFCE56', '#E7E9ED'],
+                    [swapInfo.used, swapInfo.total - swapInfo.used]
+                );
             }
 
             // Disk Chart
@@ -681,31 +638,12 @@
                 diskChartInstance.update();
             } else {
                 const diskChart = document.getElementById('diskChart').getContext('2d');
-                diskChartInstance = new Chart(diskChart, {
-                    type: 'pie',
-                    data: {
-                        labels: ['Used', 'Free'],
-                        datasets: [{
-                            label: 'Disk Usage',
-                            data: [diskInfo.used, diskInfo.total - diskInfo.used],
-                            backgroundColor: ['#63950d', '#ca7b00']
-                        }]
-                    },
-                    options: {
-                        responsive: true,
-                        plugins: {
-                            tooltip: {
-                                callbacks: {
-                                    label: function (tooltipItem) {
-                                        const value = tooltipItem.raw;
-                                        const formattedValue = formatBytes(value);
-                                        return `${tooltipItem.label}: ${formattedValue}`;
-                                    }
-                                }
-                            }
-                        }
-                    }
-                });
+                diskChartInstance = createChartInstance(
+                    diskChart,
+                    'Disk Usage',
+                    ['#63950d', '#ca7b00'],
+                    [diskInfo.used, diskInfo.total - diskInfo.used]
+                );
             }
         };
 
diff --git a/pyninja/operations.py b/pyninja/operations.py
index b2d5f13..381a5cc 100644
--- a/pyninja/operations.py
+++ b/pyninja/operations.py
@@ -43,18 +43,23 @@ def get_process_info(proc: psutil.Process) -> Dict[str, str | int]:
         write_io = squire.size_converter(io_counters.write_bytes)
     except AttributeError:
         read_io, write_io = "N/A", "N/A"
-    return {
-        "PID": proc.pid,
-        "Name": proc.name(),
-        "CPU": f"{proc.cpu_percent(models.MINIMUM_CPU_UPDATE_INTERVAL):.2f}%",
-        "Memory": squire.size_converter(proc.memory_info().rss),  # Resident Set Size,
-        "Uptime": squire.format_timedelta(
-            timedelta(seconds=int(time.time() - proc.create_time()))
-        ),
-        "Threads": proc.num_threads(),
-        "Read I/O": read_io,
-        "Write I/O": write_io,
-    }
+    try:
+        return {
+            "PID": proc.pid,
+            "Name": proc.name(),
+            "CPU": f"{proc.cpu_percent(models.MINIMUM_CPU_UPDATE_INTERVAL):.2f}%",
+            # Resident Set Size
+            "Memory": squire.size_converter(proc.memory_info().rss),
+            "Uptime": squire.format_timedelta(
+                timedelta(seconds=int(time.time() - proc.create_time()))
+            ),
+            "Threads": proc.num_threads(),
+            "Read I/O": read_io,
+            "Write I/O": write_io,
+        }
+    except psutil.NoSuchProcess as error:
+        LOGGER.debug(error)
+        return default(proc.name())
 
 
 async def process_monitor(executor: ThreadPoolExecutor) -> List[Dict[str, str]]:
@@ -75,7 +80,11 @@ async def process_monitor(executor: ThreadPoolExecutor) -> List[Dict[str, str]]:
     for proc in psutil.process_iter(
        ["pid", "name", "cpu_percent", "memory_info", "create_time"]
    ):
-        if any(name in proc.name() for name in models.env.processes):
+        # todo: Add a way to include processes (with default values) that don't exist but requested to monitor
+        if any(
+            name in proc.name() or name == str(proc.pid)
+            for name in models.env.processes
+        ):
             tasks.append(loop.run_in_executor(executor, get_process_info, proc))
     return [await task for task in asyncio.as_completed(tasks)]
 
diff --git a/pyninja/version.py b/pyninja/version.py
index 89e9150..3dc1f76 100644
--- a/pyninja/version.py
+++ b/pyninja/version.py
@@ -1 +1 @@
-__version__ = "0.1.0-alpha"
+__version__ = "0.1.0"
diff --git a/release_notes.rst b/release_notes.rst
index d49131a..b414af3 100644
--- a/release_notes.rst
+++ b/release_notes.rst
@@ -3,56 +3,79 @@ Release Notes
 
 v0.1.0 (09/29/2024)
 -------------------
-- Release `v0.1.0`
+- Includes `docker stats` in the monitoring page
+- **Full Changelog**: https://github.com/thevickypedia/PyNinja/compare/v0.0.9...v0.1.0
 
 v0.1.0-alpha (09/16/2024)
 -------------------------
-- Release `v0.1.0-alpha`
+- Alpha version for docker stats
+- **Full Changelog**: https://github.com/thevickypedia/PyNinja/compare/v0.0.9...v0.1.0-alpha
 
 v0.0.9 (09/16/2024)
 -------------------
-- Release `v0.0.9`
+- Includes disks information in the monitoring page
+- Restructured monitoring page with dedicated div container for each category of system information
+- **Full Changelog**: https://github.com/thevickypedia/PyNinja/compare/v0.0.8...v0.0.9
 
 v0.0.8 (09/10/2024)
 -------------------
-- Release `v0.0.8`
+- Includes an option to get CPU load average via API calls and monitoring page UI
+- **Full Changelog**: https://github.com/thevickypedia/PyNinja/compare/v0.0.7...v0.0.8
 
 v0.0.7 (09/09/2024)
 -------------------
-- Release `v0.0.7`
+- Includes a new feature to monitor disk utilization and get process name
+- Bug fix on uncaught errors during server shutdown
+- **Full Changelog**: https://github.com/thevickypedia/PyNinja/compare/v0.0.6...v0.0.7
 
 v0.0.6 (09/09/2024)
 -------------------
-- Release `v0.0.6`
+- Includes an option to limit maximum number of WebSocket sessions
+- Includes a logout functionality for the monitoring page
+- Uses bearer auth for the monitoring page
+- Redefines progress bars with newer color schemes
+- **Full Changelog**: https://github.com/thevickypedia/PyNinja/compare/v0.0.5...v0.0.6
 
 v0.0.6a (09/07/2024)
 --------------------
-- Release `v0.0.6a`
+- Includes an option to limit max number of concurrent sessions for monitoring page
+- **Full Changelog**: https://github.com/thevickypedia/PyNinja/compare/v0.0.5...v0.0.6a
 
 v0.0.5 (09/07/2024)
 -------------------
-- Release `v0.0.5`
+- Packs an entirely new UI and authentication mechanism for monitoring tool
+- Includes speed, stability and security improvements for monitoring feature
+- Adds night mode option for monitoring UI
+- **Full Changelog**: https://github.com/thevickypedia/PyNinja/compare/v0.0.4...v0.0.5
 
 v0.0.4 (09/06/2024)
 -------------------
-- Include an option to monitor system resources via websockets
+- Includes an option to monitor system resources via `WebSockets`
+- **Full Changelog**: https://github.com/thevickypedia/PyNinja/compare/v0.0.3...v0.0.4
 
 v0.0.3 (08/16/2024)
 -------------------
-- Release `v0.0.3`
+- Allows env vars to be sourced from both ``env_file`` and ``kwargs``
+- **Full Changelog**: https://github.com/thevickypedia/PyNinja/compare/v0.0.2...v0.0.3
 
 v0.0.2 (08/16/2024)
 -------------------
-- Release `v0.0.2`
+- Adds support for custom log configuration
+- **Full Changelog**: https://github.com/thevickypedia/PyNinja/compare/v0.0.1...v0.0.2
 
 v0.0.1 (08/11/2024)
 -------------------
-- Release `v0.0.1`
+- Includes a process monitor and remote command execution functionality
+- Security improvements including brute force protection and rate limiting
+- Accepts ``JSON`` and ``YAML`` files for env config
+- Supports custom worker count for ``uvicorn`` server
+- Allows custom logging using ``logging.ini``
+- Includes an option to set the ``apikey`` via commandline
+- **Full Changelog**: https://github.com/thevickypedia/PyNinja/compare/v0.0.0...v0.0.1
 
 v0.0.0 (08/11/2024)
 -------------------
-- Implement concurrency for validating process health
-- Update logger names across the module and README.md
+- Release first stable version
 
 0.0.0-a (08/10/2024)
 --------------------
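A closing note on the "auto calculate current process metrics" change: `main.py` now appends the server's own PID to `models.env.processes`, and `process_monitor()` matches watch-list entries either as a name substring or as an exact PID string. A minimal standalone sketch of that filter, independent of PyNinja's models (the watch-list names here are placeholders and `psutil` is the only dependency):

```python
import os

import psutil

# Watch list in the spirit of models.env.processes: configured names plus the
# current process ID appended at startup, so the dashboard always reports itself.
watchlist = ["nginx", str(os.getpid())]

for proc in psutil.process_iter(["pid", "name"]):
    # Match either a name substring or an exact PID string, as process_monitor() does
    if any(name in proc.name() or name == str(proc.pid) for name in watchlist):
        print(proc.pid, proc.name())
```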