Blog

  • assemblyqc

    plant-food-research-open/assemblyqc


    Introduction

plant-food-research-open/assemblyqc is a Nextflow pipeline that evaluates assembly quality with multiple QC tools and presents the results in a unified HTML report. The tools are shown in the Pipeline Flowchart and their references are listed in CITATIONS.md. The pipeline includes skip flags to disable execution of various tools.

    Pipeline Flowchart

    Usage

    Refer to usage, parameters and output documents for details.

    Note

If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data.

Prepare an assemblysheet.csv file with the following columns, representing the target assemblies and associated metadata.

    • tag: A unique tag which represents the target assembly throughout the pipeline and in the final report
    • fasta: FASTA file
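
    For example, a minimal assemblysheet.csv (the tags and file paths below are purely hypothetical) might look like:

    tag,fasta
    assembly_one,/path/to/assembly_one.fasta
    assembly_two,/path/to/assembly_two.fasta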

    Now, you can run the pipeline using:

    nextflow run plant-food-research-open/assemblyqc \
      -revision <version> \
      -profile <docker/singularity/.../institute> \
      --input assemblysheet.csv \
      --outdir <OUTDIR>

    Warning

    Please provide pipeline parameters via the CLI or Nextflow -params-file option. Custom config files including those provided by the -c Nextflow option can be used to provide any configuration except for parameters; see docs.

    Plant&Food Users

    Download the pipeline to your /workspace/$USER folder. Change the parameters defined in the pfr/params.json file. Submit the pipeline to SLURM for execution.

    sbatch ./pfr_assemblyqc

    Credits

    plant-food-research-open/assemblyqc was originally written by Usman Rashid (@gallvp) and Ken Smith (@hzlnutspread).

    Ross Crowhurst (@rosscrowhurst), Chen Wu (@christinawu2008) and Marcus Davy (@mdavy86) generously contributed their QC scripts.

    Mahesh Binzer-Panchal (@mahesh-panchal) and Simon Pearce (@SPPearce) helped port the pipeline modules and sub-workflows to nf-core schema.

    We thank the following people for their extensive assistance in the development of this pipeline:

The pipeline uses nf-core modules contributed by the following authors:

    Contributions and Support

    If you would like to contribute to this pipeline, please see the contributing guidelines.

    Citations

    If you use plant-food-research-open/assemblyqc for your analysis, please cite it as:

    AssemblyQC: A Nextflow pipeline for reproducible reporting of assembly quality.

    Usman Rashid, Chen Wu, Jason Shiller, Ken Smith, Ross Crowhurst, Marcus Davy, Ting-Hsuan Chen, Ignacio Carvajal, Sarah Bailey, Susan Thomson & Cecilia H Deng.

    Bioinformatics. 2024 July 30. doi: 10.1093/bioinformatics/btae477.

    An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

    This pipeline uses code and infrastructure developed and maintained by the nf-core community, reused here under the MIT license.

    The nf-core framework for community-curated bioinformatics pipelines.

    Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

    Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

    Visit original content creator repository https://github.com/Plant-Food-Research-Open/assemblyqc
  • VRS-Custom-links

    VRS-Custom-links 🛩️

Custom links for Virtual Radar Server (a.k.a. VRS). This plug-in adds new links to the Detail panel that may help identify new aircraft or find pictures of existing ones.

    Prerequisites

    • VRS installed and running
    • VRS Custom Content Plugin installed and enabled.

    Instructions

    • Clone or download the repo into a directory on the machine where VRS is running. Ensure you do not place the files under the Virtual Radar Server directory, since they could be overwritten on upgrades.
    • Open a text editor and modify the file “CustomLink.js” so that the first line of code begins with <script> and the last line ends with </script>, and save the file.
• Configure the VRS Custom Content Plugin to add the “CustomLink.js” file at the END of the HEAD portion of the pages, with an asterisk (*) in the Address field, so it will populate the links on all pages (including reports).
    • Enjoy!

    Acknowledgments

This project was only possible thanks to the invaluable help of many individuals and communities, especially the creator of VRS, Andrew Whewell, always solicitous in his forum; Andrew Hill, whose flights.hillhome.org site inspired me deeply; and all of the ADS-B Brasil community, including Ramon Martins and Jaime Hempke, who together maintain the excellent site TrafegoAereo.com.

    Contributions

Feel free to download and share these files, suggest corrections, or send requests for more aviation links, as I’m constantly updating this repository with new useful resources.

    Other Projects

    VRS Operator Flags

    VRS Aircraft Markers

    VRS Country Flags

    VRS Silhouettes

    Visit original content creator repository https://github.com/dedevillela/VRS-Custom-links
  • dotfiles

dotfiles

    My dotfiles

This dotfiles repo is what I use to set up my systems.

    Setup

• Clone this repo and run the setup.sh script
    • Start feeling the awesomeness

What’s in here

    • vim configurations that I use along with SpaceVim and Neovim
    • tmux configuration
    • bash aliases
    • bash prompt based on this
    • global gitignore and my git configuration
    • global editorconfig
    • httpie configuration
    • my bash functions
    • ssh config
    • Brewfile (run brew bundle install)
    • rest of the awesomeness that I might not have remembered to document here

    Reinstall vim

• If you wish to re-install/upgrade SpaceVim, you can set FORCE_SPACEVIMINSTALL to some value; this will force installation of the vim setup while running setup.sh, even if vim is already configured.
    FORCE_SPACEVIMINSTALL=yup ./setup.sh

    Notes

• For git diff, I’m using diff-so-fancy, so make sure you have it installed if you use this .gitconfig
• Put your personal information for ssh in ~/.ssh/config.local. This requires OpenSSH >= 7.3. See the install instructions for OpenSSH 7.4 on Ubuntu 16.04
• Put your private bash aliases in ~/.bash_aliases_secret.
• You can update the crontab file and then run crontab crontab to reload the cron jobs.
• If you wish to update the crontab file from your own crontab, you can run crontab -l > crontab in this repo.

    Directory Structure

      .
      ├── .custom-files
      │   └── eye_inv.ico
      ├── .functions
      │   ├── codepoint
      │   ├── colors
      │   ├── extract
      │   ├── gitignore
      │   ├── gitpwn
      │   ├── gogo
      │   ├── golistdeps
      │   ├── gostatic
      │   ├── hccopy
      │   ├── heroku-copy
      │   ├── man
      │   ├── mdview
      │   ├── msgerr
      │   ├── pylatest
      │   ├── sslcert
      │   ├── tmuxinator.bash
      │   └── tre
      ├── httpie
      │   └── config.json
      ├── nvim
      │   ├── init-my.vim
      │   ├── init.vim
      │   ├── pyenv-setup.sh
      │   └── pyvenv-setup.sh
      ├── scripts
      │   ├── diff-highlight
      │   └── diff-so-fancy
      ├── .ackrc
      ├── .agignore
      ├── .bash_aliases
      ├── .bash_prompt
      ├── .bashrc.defaults
      ├── .ctags
      ├── curl-timing.txt
      ├── .editorconfig
      ├── .gemrc
      ├── .gitconfig
      ├── .gitignore
      ├── .globalrc
      ├── .iex.exs
      ├── .iftoprc
      ├── LICENSE
      ├── .mpd.conf
      ├── .psqlrc
      ├── .pythonrc.py
      ├── README.md
      ├── setup.sh
      ├── ssh_config
      ├── tags
      ├── .tern-config
      ├── .tigrc
      ├── .tmux.conf
      └── .travis.yml

      5 directories, 51 files
    Visit original content creator repository https://github.com/techgaun/dotfiles
  • rag-evaluator

    RAG Evaluator

    Overview

    RAG Evaluator is a Python library for evaluating Retrieval-Augmented Generation (RAG) systems. It provides various metrics to evaluate the quality of generated text against reference text.

    Installation

    You can install the library using pip:

    pip install rag-evaluator

    Usage

    Here’s how to use the RAG Evaluator library:

    from rag_evaluator import RAGEvaluator
    
    # Initialize the evaluator
    evaluator = RAGEvaluator()
    
    # Input data
    question = "What are the causes of climate change?"
    response = "Climate change is caused by human activities."
    reference = "Human activities such as burning fossil fuels cause climate change."
    
    # Evaluate the response
    metrics = evaluator.evaluate_all(question, response, reference)
    
    # Print the results
    print(metrics)

    Streamlit Web App

    To run the web app:

1. cd into the streamlit app folder.
2. Create a virtual environment.
3. Activate the virtual environment.
4. Install all dependencies.
5. Run the app:
    streamlit run app.py
    
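    For example, a typical sequence might be (the folder name and the presence of a requirements.txt are assumptions; adjust to the repository’s actual layout):

    cd streamlit-app
    python -m venv venv
    source venv/bin/activate
    pip install -r requirements.txt
    streamlit run app.py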

    Metrics

    The RAG Evaluator provides the following metrics:

    1. BLEU (0-100): Measures the overlap between the generated output and reference text based on n-grams.

      • 0-20: Low similarity, 20-40: Medium-low, 40-60: Medium, 60-80: High, 80-100: Very high
    2. ROUGE-1 (0-1): Measures the overlap of unigrams between the generated output and reference text.

      • 0.0-0.2: Poor overlap, 0.2-0.4: Fair, 0.4-0.6: Good, 0.6-0.8: Very good, 0.8-1.0: Excellent
    3. BERT Score (0-1): Evaluates the semantic similarity using BERT embeddings (Precision, Recall, F1).

      • 0.0-0.5: Low similarity, 0.5-0.7: Moderate, 0.7-0.8: Good, 0.8-0.9: High, 0.9-1.0: Very high
    4. Perplexity (1 to ∞, lower is better): Measures how well a language model predicts the text.

      • 1-10: Excellent, 10-50: Good, 50-100: Moderate, 100+: High (potentially nonsensical)
    5. Diversity (0-1): Measures the uniqueness of bigrams in the generated output.

      • 0.0-0.2: Very low, 0.2-0.4: Low, 0.4-0.6: Moderate, 0.6-0.8: High, 0.8-1.0: Very high
    6. Racial Bias (0-1): Detects the presence of biased language in the generated output.

      • 0.0-0.2: Low probability, 0.2-0.4: Moderate, 0.4-0.6: High, 0.6-0.8: Very high, 0.8-1.0: Extreme
7. MAUVE (0-1): Captures contextual meaning, coherence, and fluency while measuring both semantic similarity and stylistic alignment.

  • 0.0-0.2: Poor, 0.2-0.4: Fair, 0.4-0.6: Good, 0.6-0.8: Very good, 0.8-1.0: Excellent
    8. METEOR (0-1): Calculates semantic similarity considering synonyms and paraphrases.

      • 0.0-0.2: Poor, 0.2-0.4: Fair, 0.4-0.6: Good, 0.6-0.8: Very good, 0.8-1.0: Excellent
    9. CHRF (0-1): Computes Character n-gram F-score for fine-grained text similarity.

      • 0.0-0.2: Low, 0.2-0.4: Moderate, 0.4-0.6: Good, 0.6-0.8: High, 0.8-1.0: Very high
10. Flesch Reading Ease (0-100): Assesses text readability.

  • 0-30: Very difficult, 30-50: Difficult, 50-60: Fairly difficult, 60-70: Standard, 70-80: Fairly easy, 80-90: Easy, 90-100: Very easy
11. Flesch-Kincaid Grade (0-18+): Indicates the U.S. school grade level needed to understand the text.

  • 1-6: Elementary, 7-8: Middle school, 9-12: High school, 13+: College level

    Testing

    To run the tests, use the following command:

    python -m unittest discover -s rag_evaluator -p "test_*.py"
    

    License

    This project is licensed under the MIT License. See the LICENSE file for details.

    Contributing

    Contributions are welcome! If you have any improvements, suggestions, or bug fixes, feel free to create a pull request (PR) or open an issue on GitHub. Please ensure your contributions adhere to the project’s coding standards and include appropriate tests.

    How to Contribute

    1. Fork the repository.
    2. Create a new branch for your feature or bug fix.
    3. Make your changes.
    4. Run tests to ensure everything is working.
    5. Commit your changes and push to your fork.
    6. Create a pull request (PR) with a detailed description of your changes.

    Contact

    If you have any questions or need further assistance, feel free to reach out via email.

    Visit original content creator repository
    https://github.com/AIAnytime/rag-evaluator

  • focus_mcp_data

    FOCUS DATA MCP Server [中文]

A Model Context Protocol (MCP) server that enables AI assistants to query data results directly. Users can obtain data results from DataFocus using natural language.

    Features

• Register on DataFocus to open an application space, and import (or directly connect to) the data tables to be analyzed.
• Select DataFocus data tables to initialize a dialogue
• Obtain data results using natural language

    Prerequisites

    • jdk 23 or higher. Download jdk
    • gradle 8.12 or higher. Download gradle
• Register on Datafocus to obtain a bearer token:
  1. Register an account in Datafocus
  2. Create an application
  3. Enter the application
  4. Admin -> Interface authentication -> Bearer Token -> New Bearer Token

    Installation

    1. Clone this repository:
    git clone https://github.com/FocusSearch/focus_mcp_data.git
    cd focus_mcp_data
2. Build the server:
    gradle clean
    gradle bootJar
    
    The jar path: build/libs/focus_mcp_data.jar

    MCP Configuration

    Add the server to your MCP settings file (usually located at ~/AppData/Roaming/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json):

    {
      "mcpServers": {
        "focus_mcp_data": {
          "command": "java",
          "args": [
            "-jar",
            "path/to/focus_mcp_data/focus_mcp_data.jar"
          ],
          "autoApprove": [
            "tableList",
            "gptText2DataInit",
            "gptText2DataData"
          ]
        }
      }
    }

    Available Tools

    1. tableList

Get the table list in DataFocus.

    Parameters:

    • name (optional): table name to filter
    • bearer (required): bearer token

    Example:

    {
      "name": "test",
      "bearer": "ZTllYzAzZjM2YzA3NDA0ZGE3ZjguNDJhNDjNGU4NzkyYjY1OTY0YzUxYWU5NmU="
    }

    2. gptText2DataInit

    Initialize dialogue.

    Parameters:

    • names (required): selected table names
    • bearer (required): bearer token
    • language (optional): language [‘english’,’chinese’]

    Example:

    {
      "names": [
        "test1",
        "test2"
      ],
      "bearer": "ZTllYzAzZjM2YzA3NDA0ZGE3ZjguNDJhNDjNGU4NzkyYjY1OTY0YzUxYWU5NmU="
    }

    3. gptText2DataData

    Query data results.

    Parameters:

    • chatId (required): chat id
    • input (required): Natural language
    • bearer (required): bearer token

    Example:

    {
      "chatId": "03975af5de4b4562938a985403f206d4",
      "input": "max(age)",
      "bearer": "ZTllYzAzZjM2YzA3NDA0ZGE3ZjguNDJhNDjNGU4NzkyYjY1OTY0YzUxYWU5NmU="
    }

    Response Format

    All tools return responses in the following format:

    {
      "errCode": 0,
      "exception": "",
      "msgParams": null,
      "promptMsg": null,
      "success": true,
      "data": {
      }
    }

    Visual Studio Code Cline Sample

1. Install the Cline plugin in VS Code
2. Configure the MCP server
3. Use it:
  1. Get the table list
  2. Initialize a dialogue
  3. Query, e.g. what is the sum salary

    Contact:

    https://discord.gg/mFa3yeq9 Datafocus

    Visit original content creator repository https://github.com/FocusSearch/focus_mcp_data
  • Linux-Process-and-Thread-Scheduling

    Linux-Process-and-Thread-Scheduling

This repository has three folders. The first demonstrates using Linux scheduling policies to schedule 3 threads running in parallel. The second uses Linux scheduling policies to demonstrate process scheduling among 3 parallel processes. The third is the implementation of a custom syscall in Linux.


Below are the explanations for each of them:

    Linux Thread Scheduling

I am launching three threads, each of which runs a different function: countA(), countB() and countC() respectively. Each function does the same thing, i.e. counts from 1 – 2^32. The following is the detailed specification of each of the threads, to begin with:

1. Thread 1 (Thr-A()): Uses the SCHED_OTHER scheduling discipline with standard priority (nice: 0).

2. Thread 2 (Thr-B()): Uses the SCHED_RR scheduling discipline with default priority.

3. Thread 3 (Thr-C()): Uses the SCHED_FIFO scheduling discipline with default priority.

Each of these threads times the process of counting from 1 – 2^32. I have used the clock_gettime() function to obtain the actual time ticks used to compute how long it took to complete a function.

After that, I am benchmarking these three schedulers against the counting program by changing the scheduling classes of the three threads (keeping the other scheduling priorities the same).

For these cases, I am using pthread_setschedparam() and related functions. After running a test whose outputs have been stored in the files thrA.txt, thrB.txt and thrC.txt respectively, I am generating histograms [file named plot.ipynb] showing when each scheduler completes the task, depending upon the scheduling policy.
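
    To make the mechanics concrete, here is a minimal sketch (not the repository's code) of how a thread in the style of Thr-B() could switch itself to SCHED_RR with pthread_setschedparam() and time the count with clock_gettime():

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    /* Worker sketch: switch the calling thread to SCHED_RR, then time the count. */
    static void *thr_b(void *arg)
    {
        struct sched_param sp = { .sched_priority = *(int *)arg };
        int err = pthread_setschedparam(pthread_self(), SCHED_RR, &sp);
        if (err != 0)
            fprintf(stderr, "pthread_setschedparam: %s\n", strerror(err));

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (volatile unsigned long i = 0; i < (1UL << 32); i++)
            ;                                   /* count from 1 to 2^32 */
        clock_gettime(CLOCK_MONOTONIC, &end);

        printf("SCHED_RR finished in %.2f s\n",
               (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9);
        return NULL;
    }

    Note that selecting SCHED_RR or SCHED_FIFO at run time typically requires root privileges (or CAP_SYS_NICE).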

    I have chosen different colors for the histogram bars, with one axis
    showing the time to complete, and the other showing the thread priorities. For our benchmarking, we have chosen 10 values each.

    To run this on your system:

    • Clone the repository
    • Open the “threadScheduling” directory
    • Run make
• Input the priority values for each of the threads, and enjoy 🙂

Note: A key difference between Linux thread scheduling policies is that for SCHED_RR and SCHED_FIFO the priority value can be set by us, whereas for SCHED_OTHER the priority is always the default (i.e. 0) and we only change the nice value associated with it.

    Linux Process Scheduling

This part involves creating three processes instead of the three threads. Each process compiles a copy of the Linux kernel source (with the minimal config, download by clicking here). The three processes are created with fork(), and each child process then uses the execl() family of system calls to run a different bash script (namely scriptA.sh, scriptB.sh and scriptC.sh), each of which comprises the commands to compile a copy of the kernel. To time the execution, the parent process gets the clock timestamp (using clock_gettime()) before the fork and after each process terminates.
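
    A hedged sketch of this fork/execl/timing pattern (script names are taken from the description above; the repository's actual code differs, e.g. it uses additional fork() calls to track per-process timing):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        const char *scripts[] = { "./scriptA.sh", "./scriptB.sh", "./scriptC.sh" };
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);   /* timestamp before forking */

        for (int i = 0; i < 3; i++) {
            pid_t pid = fork();
            if (pid < 0) { perror("fork"); return 1; }
            if (pid == 0) {
                /* Child: replace itself with the kernel-compilation script. */
                execl("/bin/bash", "bash", scripts[i], (char *)NULL);
                _exit(127);                       /* only reached if execl fails */
            }
        }

        /* Parent: wait for each child and note when it finishes. */
        for (int i = 0; i < 3; i++) {
            wait(NULL);
            clock_gettime(CLOCK_MONOTONIC, &end);
            printf("a child finished after %.2f s\n",
                   (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9);
        }
        return 0;
    }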

After running the three compiling processes in parallel, we get the time taken under each scheduling policy and plot a histogram using those values [present in the plot2.ipynb file].

    To compile the linux kernel using this process scheduling and scripts, follow the following steps:

• Have the Linux kernel [whichever version you want compiled] downloaded in your VM.
• Make 3 directories, namely “a”, “b” and “c” respectively.
• Make sure the untarred Linux kernel source is present in each of these directories.
• Run make outside these directories and bingo! You’ll have 3 Linux kernels being compiled simultaneously in your VM.

Key point: In order to run the 3 processes in parallel and also to track the time taken by each process to complete, we required 5 fork() calls.

    Simple-Syscall-Implementation

    Explanation

First, add the new syscall to the table of existing syscalls in build/linux-5.xx.xx/arch/x86/entry/syscalls/syscall_64.tbl:

    451 kern_2D_memcpy sys_kern_2D_memcpy
    

Then implement the syscall itself in sys.c, located at build/linux-5.xx.xx/kernel/sys.c

    Where we define the following function

    SYSCALL_DEFINE4(kern_2D_memcpy, float *, MAT1, float *, MAT2, int, ROW, int, COL)
    

We take the pointers to the two float matrices, where MAT1 is the destination matrix and MAT2 is the source matrix.
We create a new matrix of dimensions ROW x COL in kernel space, copy the contents of MAT2 into it using copy_from_user, and then copy it to MAT1 using copy_to_user.
If any of these steps fails we return -EFAULT, else we return 0 in case of success.
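
    The description above corresponds roughly to the following sketch (illustrative only, not the repository's exact code; it assumes the usual sys.c includes plus <linux/slab.h> for kmalloc, and returns -ENOMEM on allocation failure):

    SYSCALL_DEFINE4(kern_2D_memcpy, float __user *, MAT1, float __user *, MAT2,
                    int, ROW, int, COL)
    {
            size_t size = (size_t)ROW * COL * sizeof(float);
            float *kbuf;
            long ret = 0;

            kbuf = kmalloc(size, GFP_KERNEL);        /* kernel-space ROW x COL matrix */
            if (!kbuf)
                    return -ENOMEM;

            if (copy_from_user(kbuf, MAT2, size))    /* source matrix -> kernel */
                    ret = -EFAULT;
            else if (copy_to_user(MAT1, kbuf, size)) /* kernel -> destination matrix */
                    ret = -EFAULT;

            kfree(kbuf);
            return ret;
    }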

    Building/Compiling the syscall

After adding the syscall, we need to run the following commands to build and install our kernel:

    make
    
    make modules_install
    
    cp arch/x86_64/boot/bzImage /boot/vmlinuz-linux-5.19.9-gb0ccfee715-dirty
    
    cp System-5.19.9.map System-5.19.9-gb0ccfee715-dirty.map
    
    mkinitcpio -k 5.19.9-gb0ccfee715-dirty -g /boot/initramfs-linux-5.19.9-gb0ccfee715-dirty.img
    
    grub-mkconfig -o /boot/grub/grub.cfg
    
    reboot
    

    Test the syscall

    Make the test files in any directory and then run

    gcc test.c -o test
    ./test
    
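    For reference, a hypothetical test.c exercising the syscall could look like the sketch below (the syscall number 451 comes from the table entry above; the repository's actual test file may differ):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define SYS_kern_2D_memcpy 451   /* number registered in syscall_64.tbl */
    #define ROW 2
    #define COL 3

    int main(void)
    {
        float src[ROW][COL] = { {1, 2, 3}, {4, 5, 6} };
        float dst[ROW][COL] = { {0} };

        /* Invoke the custom syscall: copy src into dst via kernel space. */
        long ret = syscall(SYS_kern_2D_memcpy, dst, src, ROW, COL);
        if (ret != 0) {
            perror("kern_2D_memcpy");
            return 1;
        }

        for (int i = 0; i < ROW; i++) {
            for (int j = 0; j < COL; j++)
                printf("%.1f ", dst[i][j]);
            printf("\n");
        }
        return 0;
    }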

Note: Each of the folders has a readMe.txt of its own inside. You can use those for further reference.

    Thank you for visiting. Hope it helps.

    Made with 💙 by Ashutosh Gera

    Visit original content creator repository
    https://github.com/Ashutosh-Gera/Linux-Process-and-Thread-Scheduling

  • TemplateNaminakiky

    Material for PACE USA

    A Material Design theme for PACE USA


    Create a branded static site from a set of Markdown files to host the documentation of your Open Source or commercial project – customizable, searchable, mobile-friendly, 40+ languages. Set up in 5 minutes.

    A demo is worth a thousand words — check it out at https://demov1-learningcyberonline.blogspot.com.

    Features

    • It’s just Markdown — write your technical documentation in plain Markdown – no need to know HTML, JavaScript, or CSS. Material for MkDocs will do the heavy lifting and convert your writing to a beautiful and functional website.

    • Responsive by design — built from the ground up to work on all sorts of devices – from mobile phones to widescreens. The underlying fluid layout will always adapt perfectly to the available screen space.

    • Static, yet searchable — almost magically, your technical documentation website will be searchable without any further ado. Material for MkDocs comes with built-in search – no server needed – that will instantly answer your users’ queries.

    • Many configuration options — change the color palette, font families, language, icons, favicon and logo. Add a source repository link, links to your social profiles, Google Analytics and Disqus – all with a few lines of code.

    • Truly international — thanks to many contributors, Material for MkDocs includes translations for more than 40 languages and offers full native RTL (right-to-left) support for languages such as Arabic, Persian (Farsi) and Hebrew.

    • Accessible — Material for MkDocs provides extensible keyboard navigation and semantic markup including role attributes and landmarks. Furthermore, the layout is entirely based on rem values, respecting the user’s default font size.

    • Beyond GitHub Markdown — integrates natively with Python Markdown Extensions, offering additional elements like callouts, tabbed content containers, mathematical formulas, critic markup, task lists, and emojis.

    • Modern architecture — Material for MkDocs’s underlying codebase is built with TypeScript, RxJS, and SCSS, and is compiled with Webpack, bringing excellent possibilities for theme extension and customization.

    For other installation methods, configuration options, and a demo, visit squidfunk.github.io/mkdocs-material

    Users

    If you’re using this project a lot, consider sponsoring it! This will give me the opportunity to sustain my efforts maintaining it. Every contribution counts, no matter how small!

    License

    MIT License

    Copyright (c) 2016-2020 Martin Donath

    Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

    THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

    Visit original content creator repository https://github.com/learningcyber/TemplateNaminakiky
  • ui-modernization

    Introduction

    To learn more about all of U.S. Digital Response’s work, see our website. Get in touch with us by filling out our contact form.

    This guide can be found on GitBook and on GitHub.

    Many of the limitations of unemployment insurance systems existed before the pandemic hit; the difference is that before the pandemic, unemployment claims were predictable enough that agencies could appropriately staff up to account for the constraints. When COVID-19 began to spread and businesses started en masse reducing work hours, conducting layoffs, or closing all together, unforeseen and unprecedented numbers of people were filing for unemployment insurance benefits. Because the UI systems are set up to scale with the hiring and firing of individuals, many states are still months behind in processing claims.

    The human toll of these backlogs is real. Without the UI benefits delivered in a timely manner, many more people have had to make impossible financial decisions: food or rent? Utilities bill or medicine?

    The Unemployment Insurance Modernization team at USDR is working to understand the constraints under which UI systems operate and, by partnering with those agencies, evaluate, plan, and/or implement solutions to help them become more effective.

    Visit original content creator repository
    https://github.com/usdigitalresponse/ui-modernization