Most viewed papers (last 30 days)
- SYCL-Bench 2020: Benchmarking SYCL 2020 on AMD, Intel, and NVIDIA GPUs
- Green AI: A Preliminary Empirical Study on Energy Consumption in DL Models Across Different Runtime Infrastructures
- Parallel programming in mobile devices with FancyJCL
- Spyx: A Library for Just-In-Time Compiled Optimization of Spiking Neural Networks
- Benchmarking and Dissecting the Nvidia Hopper GPU Architecture
- Using AI libraries for Incompressible Computational Fluid Dynamics
- FTTN: Feature-Targeted Testing for Numerical Properties of NVIDIA & AMD Matrix Accelerators
- Sustainable Supercomputing for AI: GPU Power Capping at HPC Scale
- APPy: Annotated Parallelism for Python on GPUs
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference