If one of the above commands fails, your operating system probably lacks some build essentials. These are usually pre-installed, but if they are missing you need to install them. For instance, on Ubuntu this would require:
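A typical set of packages is sketched below; the exact list depends on the pipeline's dependencies, and the development libraries shown are assumptions commonly needed by genomics tools rather than a definitive requirement:

$ sudo apt-get update
$ sudo apt-get install build-essential cmake zlib1g-dev libbz2-dev liblzma-dev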
At various steps, the pipeline produces JSON QC files (*.json.gz). You can upload and interactively browse these files at https://gear-genomics.embl.de/alfred/. In addition, the pipeline produces a succinct QC file for each sample. If you have multiple output folders (one for each ATAC-Seq sample), you can simply concatenate the QC metrics of each sample.
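For instance, assuming each sample folder contains a tab-delimited QC summary (the file name qc_summary.tsv below is hypothetical), the per-sample metrics could be concatenated like this:

$ head -n 1 sample1/qc_summary.tsv > all_samples.qc.tsv    # keep a single header line
$ for d in sample*/ ; do tail -n +2 $d/qc_summary.tsv >> all_samples.qc.tsv ; done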
To call differential peaks on a count matrix for TSS peaks, called counts.tss.gz, using DESeq2, we first need to create a file with sample-level information (sample.info). For instance, if you have 2 replicates per condition:
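A minimal sketch of such a sample.info file is shown below; the sample names and column headers are assumptions and must match the columns of your count matrix and the DESeq2 design you intend to use:

name	condition
control_rep1	control
control_rep2	control
treatment_rep1	treatment
treatment_rep2	treatment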
Object pattern matching (opm) is similar to regular expressions. Instead of matching a string against a pattern, we match objects. Some programming languages have this feature built-in, like Rust:
let result = my_function();
match result {
    Some(value) => do_this(value),
    _ => do_that(),
}
This is just a very simple example, but it is a very powerful technique.
However, this feature is not available in Python by default. This repository contains the fruits of my work to implement this feature in Python.
Installation
Simply install this package with pip:
pip install --user pyopm
Usage
Note: So far, only very basic features have been implemented.
from pyopm import ObjectPattern

p = ObjectPattern({
    'obj': {'eval': [lambda o: isinstance(o, dict)]},
    'obj.keys': {'eval': [lambda k: all(isinstance(x, (int, str)) for x in k())],
                 'bind': {'keys': lambda o: o()}},
    'obj.values': {'bind': {'values': lambda o: o()}},
    'obj.items': {'eval': [lambda i: all(isinstance(y, float if isinstance(x, int) else int)
                                         for x, y in i())],
                  'bind': {'items': lambda i: list(i())}},
})
m = p.match({0, 1, 2})           # not a dict -> m is None
m = p.match({0: 0, 'one': 1})    # 0: 0 does not match the rules -> m is None
m = p.match({0: 0.2, 'one': 1})  # match!

with m:  # magic: use the objects bound to the names specified above
    print(keys)
    print(values)
    print(list(zip(keys, values)))  # should be the same as...
    print(items)                    # ...this!
The snippet above results in the following output:
Currently, it is not very pythonic to use (especially the ObjectPattern init). I would be glad to improve the situation, but I do not want to lose the flexibility this method provides. If you have any ideas, please open an issue!
Diversity Techniques for Riverside Wireless Sensor Nodes over Software Defined Radios
Project Description
This project will investigate the effectiveness of multiple antennas on wireless sensor nodes to improve radio-frequency reliability. Different diversity techniques will be investigated, and a suggestion as to which is the most suitable will be made. During the project, the student will model and measure a multiple-input single-output system, first in a lab and then outdoors. The platform used will be software defined radios.
Project Aim
To measure the effectiveness of using multiple antennas on sensor nodes close to, and upon water using software defined radios.
Project Progress
Set up test
Carrier Wave detection Rx Target with Host interface
FM SISO
FM audio Tx Target (SISO)
FM audio Rx Target and Host
FM SIMO
FM audio Rx Target and Host SIMO
LoRa (beyond project aim – further progress)
LoRa Rx on Target and Host with fosphor (waterfall)
LoRa Tx on Target using Host generated IQ
Arduino code for LoRa32 v2 Tx and RX
LoRa Tx and Rx now work for targets and host – decoding not always successful
Experimental setup
Created three antenna mounts to test spatial and polarisation diversity
The data collected has also been included for completeness
MATLAB code
Spectrum – function that creates the Power Density Function (PDF) of the input signal
SC, EGC, MRC – functions that perform Selection Combining, Equal Gain Combining and Maximal Ratio Combining on input signals (see the sketch after this list)
Read and write complex binary files – this is to read from GNU Radio file sinks – code was taken from gr-utils
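The project's combining functions are implemented in MATLAB; purely as an illustration of what the three rules do, here is a rough Python sketch (not the project code) that assumes per-branch complex IQ samples and known complex channel estimates h:

import numpy as np

def sc(branches, h):
    # Selection Combining: keep the branch with the strongest channel gain
    return branches[np.argmax(np.abs(h))]

def egc(branches, h):
    # Equal Gain Combining: co-phase every branch (unit-magnitude weights) and sum
    weights = np.exp(-1j * np.angle(h))[:, None]
    return (weights * branches).sum(axis=0)

def mrc(branches, h):
    # Maximal Ratio Combining: weight each branch by the conjugate of its channel gain and sum
    return (np.conj(h)[:, None] * branches).sum(axis=0)

# branches: complex array of shape (n_branches, n_samples); h: complex array of shape (n_branches,)

In practice the channel estimates would come from a pilot or training sequence; the actual MATLAB functions in /MATLAB may take different inputs.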
Project Outcome
Based on the results obtained, the optimal arrangement for a 2-branch SIMO system is an antenna separation of a full wavelength (434 MHz => 70 cm) with no antenna polarisation. MRC seems to be the best combining technique, with EGC being the second best and SC the worst.
The effects of spatial diversity were very clear throughout the set of measurements, but the effects of polarisation diversity were inconclusive.
Usage instructions
The majority of the code developed for this project was written in GNU Radio Companion and Python. To use the above code, you will need to install GNU Radio and the Ettus USRP dependencies.
GNU Radio is a free and open-source software development toolkit that provides signal processing blocks to implement software radios. GNU Radio can be installed by following the instructions here [https://wiki.gnuradio.org/index.php/InstallingGR].
If you are looking to implement the code using hardware (SDRs), you will have to install the appropriate libraries. The above code was developed to be used on the Ettus E310 USRP using the UHD library from Ettus [https://files.ettus.com/manual/page_build_guide.html].
After installing the required dependencies (explained above), you can download or clone the repository to your host machine. The file structure is simple. Code for Targets (SDRs) can be found in the /Targets folder. Please note that in my case the Target/1 is my Rx and Target/2 is my Tx. The differences between the two folders should be minimal.
The /Host_Examples folder includes all the code written for the host computer. Host code will usually involve some kind of GUI for real-time viewing of the incoming data, for controlling the setup, or both. The GUI applications can be demanding, so my recommendation is to use either a Raspberry Pi or any other mid- to high-range computer running a recent version of Ubuntu. Using VMs can complicate things, but I haven't tested that.
The /MATLAB folder includes the functions used to read, analyse and combine the IQ data obtained through the SDR. The functions are quite simple and fully documented in the code, so please refer to that. Also note that the read_ and write_complex_binary functions were not developed by me; they come with a GNU Radio installation under gr-utils.
I am not sure if I will keep maintaining this after the end of the project, but if any alterations are made they will be indicated here.
License
All code developed and shared above is developed by me, Kyprianos Diamantides (unless stated otherwise above).
Copyright (C) 2018 K. Diamantides
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see https://www.gnu.org/licenses/.
Copy the MVP with Repository Pattern folder into your Android Studio installation, in this folder: <androidStudio-folder>/plugins/android/lib/templates/activities
Restart Android Studio, and you will find it in: New -> Activity -> MVP with Repository Pattern
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the “Software”), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Copyright (C) 2015-2024, Bayerische Motoren Werke Aktiengesellschaft (BMW AG)
License
This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0.
If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/.
Contributing Guidelines
For comprehensive details on how to contribute effectively to the project, please refer to our CONTRIBUTING.md file.
vSomeIP Overview
The vSomeIP stack implements the http://some-ip.com/ (Scalable service-Oriented MiddlewarE over IP (SOME/IP)) Protocol.
The stack consists of:
a shared library for SOME/IP (libvsomeip3.so)
a shared library for SOME/IP’s configuration module (libvsomeip3-cfg.so)
a shared library for SOME/IP’s service discovery (libvsomeip3-sd.so)
a shared library for SOME/IP’s E2E protection module (libvsomeip3-e2e.so)
Optional:
a shared library for compatibility with vsomeip v2 (libvsomeip.so)
The default configuration file is /etc/vsomeip.json.
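As a rough, hedged sketch only (the keys below cover a minimal subset and the values are placeholders; consult the vSomeIP configuration documentation for the authoritative schema), such a file might look like:

{
    "unicast" : "192.168.1.10",
    "applications" : [ { "name" : "my-sample-app", "id" : "0x1234" } ],
    "services" : [ { "service" : "0x1234", "instance" : "0x5678", "unreliable" : "30509" } ],
    "routing" : "my-sample-app",
    "service-discovery" : { "enable" : "true" }
}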
Compilation with signal handling
To compile vSomeIP with signal handling (SIGINT/SIGTERM) enabled, call cmake like:
cmake -DENABLE_SIGNAL_HANDLING=1 ..
In the default setting, the application has to take care of shutting down vSomeIP in case these signals are received.
Note on Ubuntu 24.04 Build Issues
If you encounter build issues on Ubuntu 24.04, consider using Ubuntu 22.04 as a temporary fix. This is due to the ongoing transition of the GitHub Actions runner to Ubuntu 24.04, which may cause compatibility issues.
Build Instructions for Android
Dependencies
vSomeIP uses Boost >= 1.66. The boost libraries (system, thread and log) must be included in the Android source tree and integrated into the build process with an appropriate Android.bp file.
To integrate the vSomeIP library into the build process, the source code together with the Android.bp file has to be inserted into the Android source tree (by simply copying or by fetching with a custom platform manifest).
When building the Android source tree, the Android.bp file is automatically found and considered by the build system.
In order for the vSomeIP library to also be included in the Android image, the library has to be added to the PRODUCT_PACKAGES variable in one of the device/target specific makefiles:
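A minimal sketch of such an entry is shown below; the module names mirror the shared libraries listed above and are assumptions, since the actual names depend on how the Android.bp file defines them:

PRODUCT_PACKAGES += \
    libvsomeip3 \
    libvsomeip3-cfg \
    libvsomeip3-sd \
    libvsomeip3-e2e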
Ecoleta is a web and mobile application that helps people find recycling collection points.
This application was built during the Booster track of the Next Level Week run by Rocketseat. The idea of creating an application focused on the environment came from the course dates coinciding with Environment Week.
📚 Documentation
To reinforce some concepts and record commands that are hard to remember, I wrote a small DOCUMENTATION to help anyone getting started with TypeScript, Node, ReactJS and React Native.
🚀 Technologies Used
The project was developed using the following technologies:
# Install the dependencies
$ npm install
# Create the database
$ cd server
$ npm run knex:migrate
$ npm run knex:seed
# Start the API
$ npm run dev
# Start the web application
$ cd web
$ npm start
# Start the mobile application
$ cd mobile
$ npm start
♻️ How to contribute
Fork this repository,
Create a branch with your feature: git checkout -b my-feature
Commit your changes: git commit -m 'feat: My new feature'
Push your branch: git push origin my-feature
🎓 Who taught the classes?
The classes were taught by the master Diego Fernandes during the Next Level Week.
📝 License
This project is under the MIT license. See the LICENSE file for more details.
/api_deployment – The openshift repo (we just copy the sources from api to this directory and push it up to openshift)
/api/server/website – Static files of the website at-one-go.com
/api/server/website/app – The WebApp will be served from here (optimized sources from /dist will be copied to this directory via make webapp)
/api/server/test – All backend tests
/test – All frontend tests
/dist – Created via Grunt. The optimized sources will be used in the phonegap app and the webapp
/mobile/ – Phonegap project directory (v3.x)
/docs – All software documentation
Files
Makefile – The Makefile for everything
/app/index.html – The base html file for the phonegap app (goes to /mobile/ios/www/ or /mobile/android/assets/www/ via Makefile)
/app/webapp.html – The base html file for the web app (goes to api/server/website/app/ via Makefile)
Important client side JavaScript files
/app/scripts/config.js – The RequireJS config file for development (see also /app/scripts/config.production.js)
/app/scripts/main.js – The main bootstrapper, all initial event handling (domready/deviceready, global click/touch handlers, global ajax config…)
/app/scripts/router.js – The AppRouter; all client side navigation is done via the history API (pushstate is not needed in PhoneGap apps). All routes of the app are defined here, and the router takes care of rendering the root-views (screens)
Local installation
1. The App
$ cd path/to/your/projects
$ git clone repo-url.git atonego
$ cd atonego
# install local build system using grunt [optional]
$ npm install
# NOTE: The folder `atonego` should be served via a locally installed webserver like apache
$ open http://127.0.0.1/atonego # should serve index.html now
Via phonegap
$ make ios_build_dev && clear && t mobile/ios/cordova/console.log
Check out the Makefile for more information.
2. The API
A RESTful API for the app is written in JavaScript using Node.js, the website at-one-go.com and the webapp will be served through Node.js too.
NOTE: The production config file api/server/config/environments/production.json is not under version control.
$ cd api
$ npm install # install dependencies only once
$ mongod & # start mongodb if not already running
$ node server.js # start the node app in development mode
Code coverage for /app/scripts is provided via Blanket.js; see Tests in the Browser via testem.
$ cd project_root
$ testem # default mode - Browsers can run the testsuite at http://localhost:7357
$ testem ci # ci mode - run the suite in all available browsers
Execute the tests in a simulator or on a device:
$ make test_build # copies `/app` and `/test` into mobile's `www` directories
# after that, the file config.xml (ios/android) has to be edited:
<content src="https://github.com/mwager/test/index_browser.html" />
Then just run:
$ make ios # build the phonegap app with the current content of `mobile/ios/www`
$ make android # same for `mobile/android/assets/www`...
# shortcuts (config.xml!)
$ make test_build && make ios
$ make test_build && make android
We use HTTP Basic auth over SSL everywhere. On login (or signup), a secret API TOKEN gets generated from the user’s ID and a random string. This token will be encrypted via AES and sent over to the client. As RESTful APIs should be stateless, each following request must include this token in the password field of the Authorization-Header to authenticate against the service.
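For illustration only (the endpoint path below is hypothetical, not an actual route of the API), such an authenticated request could look like this, with the user's e-mail as the username and the encrypted token in the password field of the Basic auth header:

$ curl -u "user@example.com:ENCRYPTED_API_TOKEN" https://atonego-mwager.rhcloud.com/api/some_endpoint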
The app and the API/website were developed with support for multiple languages. The following directories include all texts:
/app/scripts/libs/locales -> Texts of the App
/api/server/locales -> Texts of the Website & API
Error Handling
Models always return the error as the first parameter of the callback (Node.js style, null on success); the second parameter can be used as the return value, e.g. callback(null, user).
Example
// in a model-method, e.g. todolist.fetchList()
if (!list) {
    utils.handleError('damnit some error message'); // optional logging
    return callback({key: 'listNotFound'});
}
return callback(err);

// later: (e.g. in controllers)
if (err && err.key) {
    // Error already logged (log files)
    // Usage of key via `__(key)`
    var text = __(err.key);
    // `text` is now something like "List not found" or
    // "Liste nicht gefunden" (e.g. based on current request)
    return displayErrToUserSomehow(text);
}
Deployment
Be sure to check out the Makefile for more infos.
Deployment of the App via Phonegap
Via PhoneGap for iOS [and Android]. There is a Makefile for automating tasks
like optimizing the sources or compiling the native Apps via Phonegap.
We generate one optimized JavaScript file (aog.js) via the requirejs optimizer, which will look something like this.
$ make clean # clean build directories
$ make ios_device # optimize sources, copy to ios `www` and build
$ make ios # build for ios via phonegap cli tools
$ make android # build for android via phonegap cli tools
# NOTE: Shortcuts for running and logging: (phonegap catches "console.log()" calls)
# We want a clean log file: (running in simulator)
# 1. iOS
$ echo "" > mobile/ios/cordova/console.log && make ios_build && clear && t mobile/ios/cordova/console.log
# 2. Android
# be sure to connect a real device before running the following one, else the android simulator could screw up your system (-;
$ make android_build && make android_run && clear && adb logcat | grep "Cordova"
We use git tags for versioning. However, the file mobile/www/config.xml [and api/package.json and AndroidManifest.xml] should be manually updated on releases.
Deployment of the API
The API has its own repository at OpenShift (URL: atonego-mwager.rhcloud.com). We are using a "Node.js cartridge"; the default Node version is 0.6.x (May 2013), but of course we want a newer version of Node.js, so we also set up this:
NOTE: this requires additional files (see /.gitignore).
# 1. This command will optimize the sources using grunt and copy the generated stuff from `/dist/` to `/api/server/website/app/`:
$ make webapp
# 2. This command copies the sources from `api/*` over to `api_deployment/`
# and pushes the stuff from there up to the openshift server.
$ make api_deploy
# restart from cli:
$ make api_restart
Openshift’s CLI Tool “rhc”
# install: (needs ruby)
$ gem install rhc
> rhc app start|stop|restart -a {appName}
> rhc cartridge start|stop|restart -a {appName} -c mysql-5.1
When you do a git push, the app and cartridges get restarted. If the app stopped unexpectedly, the log files are the best place to find an indication of why.
You can access your logs via ssh:
> ssh $UUID@$appURL (use "rhc domain show" to find your app's UUID/appURL)
> cd ~/mysql-5.1/log (log dir for mysql)
> cd ~/$OPENSHIFT_APP_NAME/logs (log dir for your app)
# RESTART DATABASE ONLY:
$ rhc cartridge start -a atonego -c rockmongo-1.1
# RESTART APP ONLY:
$ rhc app start -a atonego
$ ssh ...
$ du -h * | sort -rh | head -50
$ rm -rf mongodb-2.2/log/mongodb.log*
$ rm -rf rockmongo-1.1/logs/*
$ echo "" > nodejs/logs/node.log
# and remove the mongodb journal files:
$ rm -rf mongodb/data/journal/*
Locally:
rhc app tidy atonego
Cronjob:
On the production server at OpenShift, a cron job runs every minute, checking all todos with notifications, so Email, PUSH and SocketIO messages can be sent to notify users.
$ mkdir mobile && cd mobile
$ alias create="/path/to/phonegap-2.7.0/lib/ios/bin/create"
$ create ios de.mwager.atonego AtOneGo
$ alias create="/path/to/phonegap-2.7.0/lib/android/bin/create"
$ create android de.mwager.atonego AtOneGo
Avoid underscores in files and folders because Phonegap may fail to load the
contained files in Android. This is a known issue.
Edits
Zepto, Backbone
In a mobile environment like PhoneGap, memory management is much more important than on the web. Variables in the global namespace are not cleaned up by the JS engine's garbage collector, so keeping our variables as local as possible is a must.
To avoid polluting the global namespace as much as possible, Zepto, Backbone and some other files in /app/scripts/libs were edited, see “atonego”.
Libs which are still global:
window._ -> /app/scripts/libs/lodash.js
window.io -> /app/scripts/libs/socket.io.js [not used anymore]
Zepto errors
Zepto’s touch module was edited to prevent lots of strange errors like:
TypeError: 'undefined' is not an object file:///var/mobile/Applications/XXXXXXXXXX/atonego.app/www/scripts/libs/zepto.js on line 1651
Search /app/scripts/lib/zepto.js for “atonego”.
iOS does not allow HTTP Requests against self-signed/invalid certs…
On iOS devices, there were problems with the API endpoint at https://atonego-mwager.rhcloud.com/api. The following workaround is necessary!
The file /mobile/ios/atonego/Classes/AppDelegate.m was edited: (at the bottom)
Some inspiration for rendering Backbone views with nice animations: junior.js project
Disabled attributes & the tap event
The “disabled state” (“<button … disabled …/>”) will not be captured, so on every “tap” we must check whether the element has a disabled attribute or class.
Ghostclicks
This is one of the most annoying problems I have ever had. Checkout main.js and router.js for some workarounds.
UPDATE: We do not use Socket.IO anymore, as it is kind of pointless to keep an open connection in a todo app. PUSH notifications should be enough. If you want to use websockets via PhoneGap on iOS, better not use Socket.IO version 0.9.
The app always crashed on resume after a (little) longer run. I was about to give up, then I found this
SocketIO’s “auto reconnect” somehow crashed the app on the iOS test devices (seg fault!). As a workaround I disconnect the websocket connection on Phonegap’s pause-event, and manually re-connect (using same socket/connection again) on the resume-event.
Create the *.IPA file via Xcode: select "iOS Device", then: Product > Archive
Install the app on a real iOS device via Xcode
Create a provisioning profile, download it, and copy the profile to the device via the Organizer (double-clicking installs it in Xcode first)
Xcode: update the code-signing identity according to the downloaded provisioning profile (project AND target)
The cronjob
A cron job runs every minute on the server, checking all todos which are due now, so we can notify users. However, I could not figure out a good solution to re-use my existing (running) node app for this. The current workaround is to listen for a POST to a specific URL, and to POST to that URL via curl from the cron job, with some pseudo credentials set to "make sure" that the request came from the shell script, not from outside )-:
Search /api/worker.js -> “cron”
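For illustration only (the URL path and parameter name below are hypothetical; see /api/worker.js for the real ones), the cron script's request might look like:

$ curl -X POST -d "secret=SOME_PSEUDO_CREDENTIAL" http://127.0.0.1:8080/cron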
Console testing
Open the app in Chrome or Safari (dev or live), then open the dev console and enter some of the following commands to play with the app:
# as there is (almost) nothing global, we must require stuff first, use this as template:
> var $ = require('zepto'), app = require('app'), common = require('common');
# then try some of these (-:
> app.VERSION
> app.isMobile
> app.changeLang('de') // or app.changeLang('en') if currently in german
> app.router.go('help')
> window.history.back()
> var list = app.todolists.get('object id of list from url or app.todolists');
> list.toJSON()
// URL: #todolists
list.set('title', 'hello world')
> app.fetchUser() // watch network tab
var list = app.todolists.get('get id hash from url');
list.set('title', '');
Finally, make sure that the Python path is correctly set. The command
which python
should display the path to the Anaconda environment's Python, e.g., /opt/anaconda3/envs/StrainNet/bin/python
Downloading pre-trained models and data
To download the data and pretrained models for this project, you can use the download.sh script. This script will download the data and models from a remote server and save them to your local machine.
Warning: The data is approximately 15 GB in size and may take some time to download.
To download the data and models, run the following command:
. scripts/download.sh
This will download the data and models and save them to the current working directory. See the datasets folder for all of the ultrasound images (both synthetic and experimentally collected), and the models folder for the pre-trained StrainNet models.
Demo: Applying StrainNet to a Synthetic Test Case
To see a demo of StrainNet in action, you can apply the model to a synthetic test case. The synthetic test case is a simulated image with known strains that can be used to test the accuracy of the model.
To apply StrainNet to the synthetic test case, use the following command:
. scripts/demo.sh
You should now see a results folder with some plots of the performance on a synthetic test case where the largest strain is 4% (see 04DEF in StrainNet/datasets/SyntheticTestCases/04DEF).
After generating a training set, StrainNet can be trained. To train StrainNet, you will need to run the train.py script. This script can be invoked from the command line, and there are several optional arguments that you can use to customize the training process.
Here is an example command for training StrainNet with the default settings:
python train.py
You can also adjust the training settings by specifying command-line arguments. For example, to change the optimizer and learning rate, you can use the following command:
python train.py --optimizer SGD --lr 0.01
Arguments
Below is a list of some of the available command-line arguments that you can use to customize the training process:
| Argument | Default | Description |
| --- | --- | --- |
| --optimizer | Adam | The optimizer to use for training. |
| --lr | 0.001 | The learning rate to use for the optimizer. |
| --batch_size | 8 | The batch size to use for training. |
| --epochs | 100 | The number of epochs to train for. |
| --train_all | False | Whether to train all of the models. |
For a complete list of available command-line arguments and their descriptions, you can use the --help flag:
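python train.py --help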
By default, train.py will only train one of the four models needed for StrainNet. To train all the models needed for StrainNet, you can use the train.sh script. This script will invoke the necessary training scripts and pass the appropriate arguments to them.
To run the train.sh script, simply execute the following command from the terminal:
. scripts/train.sh
Viewing the progress of your training with Tensorboard
By default, running train.py will write an events.out file to visualize the progress of training StrainNet with Tensorboard. After running train.py, locate the events.out in the newly-created runs folder.
Viewing the Tensorboard Webpage
To view the Tensorboard webpage, you will need to start a Tensorboard server. You can do this by running the following command in the terminal:
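tensorboard --logdir="path/to/dir/containing/events.out"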
Replace "path/to/dir/containing/events.out" with a path to a folder containing events.out file(s) (e.g., runs). This will start a Tensorboard server and print a message with a URL that you can use to access the Tensorboard webpage.
To view the Tensorboard webpage, open a web browser and navigate to the URL printed by the Tensorboard server. This will open the Tensorboard webpage, which allows you to view various training metrics and graphs.
To view the Tensorboard events.out file in Visual Studio Code, you may use the Tensorboard command.
Open the command palette (View → Command Palette… or Cmd + Shift + P on macOS)
Type “Python: Launch Tensorboard” in the command palette and press Enter.
Select "Select another folder" and choose the runs folder to view the events.out file(s).
Evaluating the performance of StrainNet
After training the model, you can evaluate its performance on a test dataset to see how well it generalizes to unseen data. To evaluate the model, you will need to have a test dataset in a format that the model can process.
To evaluate the model, you can use the eval.py script. This script loads the trained model and the test dataset, and runs the model on the test data to compute evaluation metrics such as accuracy and precision.
To run the eval.py script, use the following command:
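In its simplest form this might be (depending on your setup, additional arguments such as the model or dataset paths may be required):

python eval.py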
To apply the pretrained models to the synthetic test cases, you can use the eval.sh script. This script will invoke the necessary evaluation scripts and pass the appropriate arguments to them.
To run the eval.sh script, simply execute the following command from the terminal:
. scripts/eval.sh
Citation
@article{huff2024strainnet,
title={Deep learning enables accurate soft tissue tendon deformation estimation in vivo via ultrasound imaging},
author={Huff, Reece D and Houghton, Frederick and Earl, Conner C and Ghajar-Rahimi, Elnaz and Dogra, Ishan and Yu, Denny and Harris-Adamson, Carisa and Goergen, Craig J and O’Connell, Grace D},
journal={Scientific Reports},
volume={14},
number={1},
pages={18401},
year={2024},
publisher={Nature Publishing Group UK London}
}
LICENSE
This project is licensed under the MIT License – see the LICENSE file for details.
AngleSharp.XPath extends AngleSharp with the ability to select nodes via XPath queries instead of CSS selector syntax. This is more powerful and potentially more common for .NET developers working with XML on a daily basis.
Basic Use
With this library, using XPath queries is as simple as writing:
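Below is a minimal sketch (the HTML snippet and variable names are illustrative, not taken from the library's documentation):

using AngleSharp.Html.Parser;
using AngleSharp.XPath;

var parser = new HtmlParser();
var document = parser.ParseDocument("<ul><li>First</li><li>Second</li></ul>");
var secondItem = document.Body.SelectSingleNode("//li[2]"); // XPath query instead of a CSS selector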
Besides SelectSingleNode we can also use SelectNodes. Both are extension methods defined in the AngleSharp.XPath namespace.
If wanted, we can also use XPath directly in CSS selectors, such as in QuerySelector or QuerySelectorAll calls. For this, we only need to apply the following configuration:
It is important that the original selector has all elements (*) as the intersection of the ordinary CSS selector and the XPath attribute is considered. The XPath attribute consists of a head (xpath>) and a value – provided as a string, e.g., //li[2].
Features
Uses XPathNavigator from System.Xml.XPath
Adds XPath capabilities to CSS query selectors if wanted
Participating
Participation in the project is highly welcome. For this project the same rules as for the AngleSharp core project may be applied.
If you have any question, concern, or spot an issue then please report it before opening a pull request. An initial discussion is appreciated regardless of the nature of the problem.
Live discussions can take place in our Gitter chat, which supports using GitHub accounts.
This project has adopted the code of conduct defined by the Contributor Covenant to clarify expected behavior in our community.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.