Blog

  • Project-1—Automated-ELK-Stack-Deployment

    Automated ELK Stack Deployment

    The files in this repository were used to configure the network depicted below.

    Network Diagram

These files have been tested and used to generate a live ELK deployment on Azure. They can be used to recreate the entire deployment pictured above. Alternatively, select portions of the configuration and YAML files may be used to install only certain pieces of it, such as Filebeat.

    This document contains the following details:

    • Description of the Topology
    • Access Policies
    • ELK Configuration
    • Beats in Use
    • Machines Being Monitored
    • How to Use the Ansible Build

    Description of the Topology

    The main purpose of this network is to expose a load-balanced and monitored instance of DVWA, the D*mn Vulnerable Web Application.

    Load balancing ensures that the application will be highly available, in addition to restricting inbound access to the network.

    What aspect of security do load balancers protect?

• Load balancers are designed to take incoming traffic and distribute it across multiple resources, preventing any single server from becoming overloaded.
    • Load balancers play an important role in security by defending against distributed denial-of-service (DDoS) attacks.

    What is the advantage of a jump box?

• A jump box is a hardened virtual machine exposed on the public network that is used to manage the other systems. It is treated as a single, controlled entryway to a server group from within your security zone.
• The advantage of having a jump box is that it restricts access to the internal servers, which are otherwise inaccessible over the network.

Integrating an ELK server allows users to easily monitor the vulnerable VMs for changes to their file systems and system metrics.

    What does Filebeat watch for?

• Filebeat: watches for changes to the file system and collects data and logs about it.

    What does Metricbeat record?

• Metricbeat: collects machine metrics and statistics, such as uptime.

    The configuration details of each machine may be found below.

| Name       | Function    | IP Address               | Operating System | Server                  |
|------------|-------------|--------------------------|------------------|-------------------------|
| Jump Box   | Gateway     | 104.43.255.56; 10.0.0.1  | Linux            | Ubuntu Server 18.04 LTS |
| Web-1 VM   | DVWA Server | 10.0.0.5                 | Linux            | Ubuntu Server 18.04 LTS |
| Web-2 VM   | DVWA Server | 10.0.0.6                 | Linux            | Ubuntu Server 18.04 LTS |
| Web-3 VM   | DVWA Server | 10.0.0.7                 | Linux            | Ubuntu Server 18.04 LTS |
| ELK Server | Monitoring  | 20.242.105.231; 10.1.0.7 | Linux            | Ubuntu Server 18.04 LTS |

Note: In addition to the above, Azure has provisioned a load balancer in front of all the machines except for the Jump Box. The load balancer's targets are organized into the following availability zones: Web-1, Web-2, Web-3.


    Access Policies

    The machines on the internal network are not exposed to the public Internet.

    Only the Jump Box Provisioner machine can accept connections from the Internet. Access to this machine is only allowed from the following IP addresses:

• Whitelisted IP addresses: Local Admin IP, Workstation (My Personal IP)

    Machines within the network can only be accessed by Workstation (My IP) and Jump Box Provisioner.

    Which machine did you allow to access your ELK VM?

    • Jump Box Provisioner IP: 10.0.0.4 via SSH Port 22

    What was its IP address?

    • Local Admin IP, Workstation (My Personal IP) via port TCP 5601

    A summary of the access policies in place can be found in the table below.

| Name       | Publicly Accessible | Allowed IP Addresses | Port     | Server                  |
|------------|---------------------|----------------------|----------|-------------------------|
| Jump Box   | Yes                 | Local Admin IP       | SSH 22   | Ubuntu Server 18.04 LTS |
| Web-1 VM   | No                  | 10.0.0.5             | SSH 22   | Ubuntu Server 18.04 LTS |
| Web-2 VM   | No                  | 10.0.0.6             | SSH 22   | Ubuntu Server 18.04 LTS |
| Web-3 VM   | No                  | 10.0.0.7             | SSH 22   | Ubuntu Server 18.04 LTS |
| ELK Server | No                  | Local Admin IP       | TCP 5601 | Ubuntu Server 18.04 LTS |

ELK Configuration

    Ansible was used to automate configuration of the ELK machine. No configuration was performed manually, which is advantageous because…

    What is the main advantage of automating configuration with Ansible?

• Ansible is an open-source tool that provides simple configuration management, cloud provisioning, and application deployment.
• It allows you to define and deploy YAML playbooks.

    We will create an ELK server within a virtual network. Specifically we will:

    • Create a new vNet
    • Create a Peer Network Connection
    • Create a new VM
    • Create an Ansible Playbook
• Download and Configure the Container
    • Launch and Expose the Container

    Creating a New vNet

1. Create a new vNet located in the same resource group you have been using.

  • Make sure this vNet is located in a new region and not the same region as your other VMs.

      • Leave the rest of the settings at default.

  • Notice that, in this example, the IP addressing automatically created a new network space of 10.1.0.0/16. If your network space is different (e.g., 10.2.0.0 or 10.3.0.0), that is OK as long as you accept the default settings: Azure automatically creates a network that will work.

    Create a Peer Network Connection

1. Create a peer network connection between your vNets. This will allow traffic to pass between your vNets and regions. This peering creates both a connection from your first vNet to your second vNet and a reverse connection from your second vNet back to your first, allowing traffic to pass in both directions.

      • Navigate to ‘Virtual Network’ in the Azure Portal.

  • Select your new vNet to view its details.

      • Under ‘Settings’ on the left side, select ‘Peerings’.

      • Click the + Add button to create a new Peering.

      • Make sure your new Peering has the following settings:

    • A unique name for the connection from your new vNet to your old vNet.

          • Elk-to-Red would make sense
        • Choose your original RedTeam vNet in the dropdown labeled ‘Virtual Network’. This is the network you are connecting to your new vNet and you should only have one option.

        • Name the resulting connection from your RedTeam Vnet to your Elk vNet.

          • Red-to-Elk would make sense
      • Leave all other settings at their defaults.

The following screenshot displays the result of the new peering connections between your ELK vNet and your old vNet.

    Create a new VM

    1. Creating a new VM

  • Create a new Ubuntu VM in your virtual network with the following configuration:
  • The VM must have at least 4GB of RAM.
  • The VM must be assigned a public IP address.
  • The VM must be added to the new region in which you created your new vNet, and a new basic network security group must be created for it.
  • After creating the VM, make sure that it works by connecting to it from your jump box using ssh username@jump.box.ip
         ssh RedAdmin@104.43.255.56
      • Check your Ansible container: sudo docker ps

      • Locate the container name: sudo docker container list -a

      • Start the container: sudo docker container start peaceful_borg

      • Attach the container: sudo docker attach peaceful_borg

      • Copy the SSH key from the Ansible container on your jump box: cat ~/.ssh/id_rsa.pub

      • Configure a new VM using that SSH key.

    Configuring Container

    1. Downloading and Configuring Container

  • Configure your hosts file inside the Ansible container: cd /etc/ansible/, then nano hosts and input the IP address of your VM followed by ansible_python_interpreter=/usr/bin/python3 (see the sample entry after this list).

      • Create a Playbook that installs Docker and configures the container

      • Run the ELK playbook:

         ansible-playbook install-elk.yml
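
    For reference, a minimal /etc/ansible/hosts entry for this setup might look like the following. This is a sketch assuming the ELK VM's internal IP of 10.1.0.7 from the table above; the [elk] group name matches the hosts: elk line used by the playbook below.

       [elk]
       10.1.0.7 ansible_python_interpreter=/usr/bin/python3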

The following screenshot displays the result of running the ELK installation YAML file.

    Creating ELK Playbook

    The playbook implements the following tasks:

    Configure ELK VM with Docker

    - name: Configure ELK VM with Docker
      hosts: elk
      remote_user: RedAdmin
      become: true
      tasks:

    Install Docker.io

    - name: Install docker.io
      apt:
        update_cache: yes
        force_apt_get: yes
        name: docker.io
        state: present

    Install Python3-pip

    - name: Install python3-pip
      apt:
        force_apt_get: yes
        name: python3-pip
        state: present

    Install Docker Python Module

    - name: Install Docker python module
      pip:
        name: docker
        state: present

    Increase virtual memory

    - name: Use more memory
      sysctl:
        name: vm.max_map_count
        value: 262144
        state: present
        reload: yes

    Download and Launch a Docker ELK Container with ports 5601, 9200, 5044.

    - name: Download and launch a docker elk container
      docker_container:
        name: elk
        image: sebp/elk:761
        state: started
        restart_policy: always
        ports:
          - 5601:5601
          - 9200:9200
          - 5044:5044

    Enable Service Docker on Boot

    - name: Enable service docker on boot
      systemd:
        name: docker
        enabled: yes

After the ELK container is installed, SSH into your ELK VM (ssh username@your.ELK-VM.IP) and double-check that the elk-docker container is running.

       ssh RedAdmin@10.1.0.7

    The screenshot displays the results when successfully connected to ELK via SSH

    The following screenshot displays the result of running docker ps after successfully configuring the ELK instance.

    docker ps output

    Restrict access to the ELK VM using Azure network security groups.

• You will need to add your public IP address to a whitelist: open the virtual network's existing NSG and create an inbound rule for your security group that allows TCP traffic on port 5601 from your public IP address (a CLI sketch follows below).
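
If you prefer the Azure CLI, such a rule could be created along the following lines. This is a hypothetical sketch: the resource group and NSG names are placeholders you must replace with your own.

   az network nsg rule create \
     --resource-group <your-resource-group> \
     --nsg-name <your-elk-nsg> \
     --name Allow-Kibana \
     --priority 100 \
     --direction Inbound \
     --access Allow \
     --protocol Tcp \
     --destination-port-ranges 5601 \
     --source-address-prefixes <your-public-ip>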

    Verify that you can access your server by navigating to http://[your.ELK-VM.External.IP]:5601/app/kibana. Use the public IP address of your new VM.

       http://20.242.105.231:5601/app/kibana

    You should see this page:

    If you can get on this page, congratulations! You have successfully created an ELK Server!


    Target Machines & Beats

    This ELK server is configured to monitor the following machines:

    • Web-1 VM: 10.0.0.5
    • Web-2 VM: 10.0.0.6
    • Web-3 VM: 10.0.0.7

    We have installed the following Beats on these machines:

    • Filebeat
    • Metricbeat

    These Beats allow us to collect the following information from each machine:

    Filebeat:

    • Filebeat monitors the specified log file or location, collects log events, and forwards them to Elasticsearch or Logstash for indexing.
    • Filebeat is used to collect and send log files.
    • Filebeat can be installed on almost any operating system, including Docker containers. It also contains internal modules for specific platforms such as Apache, MySQL, and Docker, including default configurations and Kibana objects for these platforms.

    Metricbeat:

• Metricbeat helps monitor your servers by collecting metrics and statistics from the systems and services running on them and shipping the data to a specified output.
• Like Filebeat, Metricbeat supports internal modules for collecting statistics from particular platforms.
• You can use these modules, and subsets of them called metricsets, to configure how often Metricbeat collects metrics and which specific metrics it collects (see the sketch after this list).
• We use it to track failed SSH login attempts, sudo escalations, and CPU/RAM statistics.
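
As an illustration only (this file is not one of the deployment's files), a Metricbeat module definition pairs a module with its metricsets and a collection period. The Docker module enabled later in this project could be configured along these lines:

   # Hypothetical metricbeat module configuration (docker module)
   - module: docker
     metricsets: ["cpu", "memory", "network"]   # which metrics to collect
     period: 10s                                # how often to collect them
     hosts: ["unix:///var/run/docker.sock"]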

We will set up two tools that will help our ELK monitoring server: Filebeat and Metricbeat. Specifically, we will:

• Install Filebeat and Metricbeat on the Web VMs
• Create the Filebeat and Metricbeat Configuration Files
• Create a Filebeat and Metricbeat Installation Playbook
• Verify Filebeat and Metricbeat are Installed

    Installing Filebeat and Metricbeat on DVWA Container

1. Make sure that the ELK container is running:

  • Navigate to Kibana: http://[your.ELK-VM.External.IP]:5601/app/kibana. Use the public IP address of the ELK server that you created.

  • If Kibana is not up and running, open a terminal on your PC, SSH into the ELK server, and start your ELK docker container:

        • Run docker container list -a
        • sudo docker start elk
2. Use the ELK server's GUI to find the Filebeat installation instructions for Linux.

      • Navigate to your ELK server’s IP:
        • Click on Add log data
        • Select System Logs
        • Click on DEB tab under Getting Started
3. Use the ELK server's GUI to find the Metricbeat installation instructions for Linux.

  • Navigate to your ELK server's IP:
    • Click on Add metric data
        • Select Docker metrics
        • Click on DEB tab under Getting Started

Create the Filebeat and Metricbeat Configuration Files

1. We will create and edit the Filebeat and Metricbeat configuration files.

  • Start by opening a terminal, SSH into your jump box, and start up the Ansible container.
  • Navigate to the Ansible container's files directory and edit the filebeat-config.yml and metricbeat-config.yml configuration files.
  • The username will be elastic and the password is changeme.

    Scroll down to line #1106 and replace the IP address with the IP address of your ELK VM.

    output.elasticsearch:
      hosts: ["10.1.0.7:9200"]
      username: "elastic"
      password: "changeme"

    Scroll down to line #1806 and replace the IP address with the IP address of your ELK VM.

    setup.kibana:
      host: "10.1.0.7:5601"

When finished, save both files in /etc/ansible/files.

    Creating Filebeat and Metricbeat Installation Playbook

1. Create the Filebeat and Metricbeat playbooks and save them in the /etc/ansible/roles directory.

    First, nano filebeat-playbook.yml with Filebeat template below:

    - name: installing and launching filebeat
      hosts: webservers
      become: yes
      tasks:
    
      - name: download filebeat deb
    command: curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.1-amd64.deb
    
      - name: install filebeat deb
        command: dpkg -i filebeat-7.6.1-amd64.deb
    
      - name: drop in filebeat.yml
        copy:
          src: /etc/ansible/files/filebeat-config.yml
          dest: /etc/filebeat/filebeat.yml
    
      - name: enable and configure system module
        command: filebeat modules enable system
    
      - name: setup filebeat
        command: filebeat setup
    
      - name: start filebeat service
        command: service filebeat start
    
      - name: enable service filebeat on boot
        systemd:
          name: filebeat
          enabled: yes

    Next, nano metricbeat-playbook.yml with Metricbeat template below:

    - name: Install metric beat
      hosts: webservers
      become: true
      tasks:
        # Use command module
      - name: Download metricbeat
        command: curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.6.1-amd64.deb
    
        # Use command module
      - name: install metricbeat
        command: dpkg -i metricbeat-7.6.1-amd64.deb
    
        # Use copy module
      - name: drop in metricbeat config
        copy:
          src: /etc/ansible/files/metricbeat-config.yml
          dest: /etc/metricbeat/metricbeat.yml
    
        # Use command module
      - name: enable and configure docker module for metric beat
        command: metricbeat modules enable docker
    
        # Use command module
      - name: setup metric beat
        command: metricbeat setup
    
        # Use command module
      - name: start metric beat
        command: service metricbeat start
    
        # Use systemd module
      - name: enable service metricbeat on boot
        systemd:
          name: metricbeat
          enabled: yes
    
2. Run both playbooks to confirm that they work: ansible-playbook filebeat-playbook.yml and ansible-playbook metricbeat-playbook.yml

    This screenshot displays the results for filebeat-playbook:

    This screenshot displays the results for metricbeat-playbook:

3. Verify that the playbooks work by navigating to the Filebeat and Metricbeat installation pages on the ELK server GUI; under Step 5: Module Status, click on Check Data.

The screenshot displays the results of the ELK stack successfully receiving logs.

The screenshot displays the results of the ELK stack successfully receiving metrics.


    Using the Playbook

    In order to use the playbook, you will need to have an Ansible control node already configured. Assuming you have such a control node provisioned:

    SSH into the control node and follow the steps below:

• Update the hosts file /etc/ansible/hosts to include the ELK server IP 10.1.0.7

    ELK Host

    • Run the ELK, Filebeat and Metricbeat playbooks:
    	ansible-playbook install-elk.yml
    	ansible-playbook filebeat-playbook.yml
    	ansible-playbook metricbeat-playbook.yml
    
    • Navigate to http://[your.ELK-VM.External.IP]:5601/app/kibana to check that the installation worked as expected.

We will verify the ELK server is working with Filebeat and Metricbeat by pulling logs and metrics from our web VM servers.

Three tasks are implemented to test whether the ELK server is working by pulling both logs and metrics from the web VMs we created:

    1. SSH Barrage: Generating a high amount of failed SSH login attempts.

    • Run ssh username@ip.of.web.vm
    • An error should occur as shown in the screenshot below:

    • Write a script that creates 1000 login attempts on the webserver 10.0.0.5.
       for i in {1..1000};
       do
        ssh sysadmin@10.0.0.5;
       done;
• Write a script with a nested loop that generates SSH login attempts across all 3 of your web server VMs.
       while true;
       do
        for i in {5..7};
         do
          ssh sysadmin@10.0.0.$i;
         done;
       done

The screenshot displays the results in the Kibana logs when running the scripts.

    2. Linux Stress: Generating a high amount of CPU usage on VM servers to verify that Kibana picks up data.

• While in the jump box, go inside the container and log in to your web server VM.
       $sudo docker container list -a 
       $sudo docker start [CONTAINER NAME]
       $sudo docker attach [CONTAINER NAME]
    • SSH into your web VM: ssh username@web.ip.vm
• Run the command sudo apt install stress, which installs the stress program.
• Run the command sudo stress --cpu 1 to generate CPU load (stop it with Ctrl+C when finished).
• View the metrics in Kibana, which will show the CPU usage, as in the screenshot below:

    3. wget-DoS: Generating a high amount of web requests to our VM servers to make sure that Kibana picks up data.

• Log into the Jump-Box VM and run the command wget ip.of.web.vm: an index.html file will be downloaded from your web VM to your jump box.
• Write a loop script that creates 1000 web requests to the 10.0.0.5 server and downloads files onto your jump box.
       for i in {1..1000};
       do
        wget 10.0.0.5;
       done;
• View the metrics in Kibana, which will show the Load, Memory Usage, and Network Traffic, as in the screenshot below:


As a bonus, here are the specific commands the user will need to run to download the playbook, update the files, and more.

| Command                                   | Explanation                                                 |
|-------------------------------------------|-------------------------------------------------------------|
| ssh username@[Jump.box.IP]                | Connect to the Jump-Box VM                                  |
| ssh-keygen                                | Generate an SSH key pair (needed to set up a VM)            |
| cat ~/.ssh/id_rsa.pub                     | Display the public SSH key                                  |
| docker ps                                 | List running containers                                     |
| docker start [CONTAINER]                  | Start a container                                           |
| docker attach [CONTAINER]                 | Attach to a running container                               |
| docker stop [CONTAINER]                   | Stop a running container                                    |
| cd /etc/ansible                           | Change directory to /etc/ansible                            |
| nano /etc/ansible/hosts                   | Edit the hosts file                                         |
| nano /etc/ansible/ansible.cfg             | Edit the Ansible configuration file                         |
| nano filebeat-config.yml                  | Edit the Filebeat configuration yml file                    |
| nano filebeat-playbook.yml                | Edit the Filebeat playbook yml file                         |
| nano metricbeat-config.yml                | Edit the Metricbeat configuration yml file                  |
| nano metricbeat-playbook.yml              | Edit the Metricbeat playbook yml file                       |
| ansible-playbook [location][filename.yml] | Execute an Ansible playbook                                 |
| curl [options/URL]                        | Client URL: enables data transfer over network protocols   |
| dpkg -i [package-file]                    | Debian package manager; -i installs a package file          |
| exit                                      | Exit the shell                                              |

    Resources

    Visit original content creator repository https://github.com/raospiratory/Project-1—Automated-ELK-Stack-Deployment
  • block-stake

    Block Stake

Block Stake is a set of contracts that facilitate staking following the MasterChef staking algorithm.

    Try dApp
    |
    View Demo

Table of Contents

1. About The Project
2. Installation
3. Usage
4. Contributing

    About The Project

    Block Stake is a staking contract following a modified version of Sushiswap’s MasterChef algorithm that takes in an upgradeable token as stake asset and issues rewards in a different token.

Users who hold the BlockStake (BST) token can earn rewards on their investment by sharing in the rewards distributed per block, in proportion to their staked BST. A sketch of this accounting scheme follows.
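
As a rough illustration of the MasterChef idea, here is a hypothetical Solidity sketch (not Block Stake's actual interface; all names are illustrative). Each pool tracks a cumulative accRewardPerShare, and each user stores a rewardDebt snapshot so pending rewards can be computed lazily:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Hypothetical sketch of MasterChef-style reward accounting.
    contract MasterChefSketch {
        struct UserInfo {
            uint256 amount;     // tokens the user has staked
            uint256 rewardDebt; // accounting snapshot from the user's last interaction
        }

        // Cumulative rewards per staked token, scaled by 1e12 for integer precision.
        uint256 public accRewardPerShare;
        mapping(address => UserInfo) public users;

        // Rewards owed = what the stake has accrued since the last interaction,
        // i.e. the current cumulative share minus the recorded debt.
        function pendingReward(address who) public view returns (uint256) {
            UserInfo storage u = users[who];
            return (u.amount * accRewardPerShare) / 1e12 - u.rewardDebt;
        }
    }

The appeal of this scheme is that reward distribution is O(1) per interaction: the contract never loops over stakers; it only updates the global accumulator and each user's debt on deposit and withdrawal.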


    Built With

    Back

    • Solidity
    • Ethereum
    • Hardhat
    • Openzeppelin Contracts
    • Ethers.js

    Front

    • NextJS
    • ReactJS

    Testing

    • Chai
    • Mocha


    Installation

    1. Clone the repo
      git clone https://github.com/kingahmedino/block-stake.git && cd block-stake
    2. Install dependencies
      yarn install


    Usage

    Try running some hardhat tests:

    npx hardhat test

Try deploying the contract to a testnet; Goerli is the default:

    npx hardhat run scripts/deploy.js

    or

    Edit hardhat.config.js to add more networks to deploy to:

    networks: {
        goerli: {
          url: process.env.NETWORK,
          accounts: [process.env.PRIVATE_KEY],
        },
      }


    Contributing

    Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

    If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag “enhancement”.
    Don’t forget to give the project a star! Thanks again!

    1. Fork the Project
    2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
    3. Commit your Changes (git commit -m 'Add some AmazingFeature')
    4. Push to the Branch (git push origin feature/AmazingFeature)
    5. Open a Pull Request


    Visit original content creator repository
    https://github.com/kingahmedino/block-stake

  • book-store-backend

    Book Store

    Description

    The E-Commerce Database Management System (EC-DBMS) is designed to store, process, retrieve, and analyze data related to
    online sales activities conducted by customers from home. The system maintains information about customers, vendors,
    products, product categories, orders, and couriers. It allows vendors to create online stores, customers to browse
    products, and administrators to approve or reject shop requests while managing shop categories. The system tracks items
    in each shop and facilitates online purchases without the need for customers to visit physical stores. This online
    shopping platform relies on the internet as the primary means of selling goods and services, displaying products in a
    categorized format. Customers can view product details, check prices, and place orders using their registered accounts,
    with payment made upon delivery.

    Technologies

    • Spring Boot
    • Spring MVC
    • Spring Security
    • Spring Data JPA
    • Thymeleaf
    • H2 database
    • Tests

Reference code:
https://github.com/Trandinhdongkhanh/G2WebStoreV2/blob/main/src/main/java/com/hcmute/g2webstorev2/controller/ProductController.java
https://www.codejava.net/frameworks/spring-boot/spring-security-jwt-authentication-tutorial

Access the Swagger UI: http://localhost:8081/swagger-ui/index.html#/checkout-controller

Send Email: https://www.geeksforgeeks.org/how-to-send-email-with-thymeleaf-template-in-spring-boot/
https://www.youtube.com/watch?v=Sst9O5C6WhQ

    Visit original content creator repository
    https://github.com/greeneley/book-store-backend

  • Tony.Interceptor

    Welcome to Tony.Interceptor

This is a project written in C# that can intercept any instance method you want.

You can do something before and something after whenever you invoke the method.

    why use Tony.Interceptor

Imagine you have written thousands of methods. One day, your boss requires you to add logging to each method, and you are driven mad. Would you want to write the logging code in every method?

Or would you use a third-party AOP framework? That is very heavy.

No, and this is the reason to use Tony.Interceptor!

    usage

1. Define a class that implements the interface IInterceptor:

This lets you handle BeforeInvoke and AfterInvoke.

    class LogInterceptor : IInterceptor
    {
        public void AfterInvoke(object result, MethodBase method)
        {
            Console.WriteLine($"Finished executing {method.Name}, return value: {result}");
        }

        public void BeforeInvoke(MethodBase method)
        {
            Console.WriteLine($"About to execute {method.Name}");
        }
    }

2. Mark the class or method that you want to intercept

First of all, the class must extend Interceptable. In fact, Interceptable extends ContextBoundObject, which simply places the class into a bound context.

Then, you can use InterceptorAttribute to mark the class or an instance method in the class.

If you mark the class, it intercepts all public instance methods by default.

If you do not want to intercept a method in the marked class, you can use InterceptorIgnoreAttribute.

    [Interceptor(typeof(LogInterceptor))]
    public class Test : ContextBoundObject
    {
        public void TestMethod()
        {
            Console.WriteLine("Executing TestMethod");
        }

        public int Add(int a, int b)
        {
            Console.WriteLine("Executing Add");
            return a + b;
        }

        [InterceptorIgnore]
        public void MethodNotIntercept()
        {
            Console.WriteLine("MethodNotIntercept");
        }
    }

3. Create an instance of the class and invoke the methods

    class Program
    {
        static void Main(string[] args)
        {
            Test test = new Test();
            test.TestMethod();
            test.Add(5,6);
            test.MethodNotIntercept();
            Console.Read();
        }
    }
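
With the English log strings used above, running this program should print output along these lines (a sketch, assuming the interceptor fires for the two marked methods and skips the ignored one):

    About to execute TestMethod
    Executing TestMethod
    Finished executing TestMethod, return value:
    About to execute Add
    Executing Add
    Finished executing Add, return value: 11
    MethodNotIntercept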

    Global Setting

This is a switch that can enable or disable the interceptor. The switch is:

    public static bool IsEnableIntercept { get; set; } = true;

The default value is true. If we set it to false, the interceptors we have deployed are disabled.

    Visit original content creator repository
    https://github.com/lishuangquan1987/Tony.Interceptor

  • api-express-user-auth

    Api User Auth

    • Code developed for academic purposes.
    • User management CRUD API with express.

    Install and Run

    1. Clone the repository : git clone https://github.com/JrSchmidtt/api-express-user-auth
    2. Install node.js to run
    3. Install Visual Studio Code to edit
    4. Install HeidiSQL and import database.sql
5. Open the PowerShell terminal in Visual Studio and run the command npm install in the folder to install the dependencies
6. Run the command node index.js in the folder with the usage examples

    Endpoints

    GET /user

    Returns the list of all registered users only for authenticated administrators.

    request:

    var request = require('request');
    var options = {
      'method': 'GET',
      'url': 'http://localhost:8080/user',
      'headers': {
        'Authorization': 'Bearer AUTHENTICATED-ADMIN-ACCOUNT-TOKEN'
      }
    };
    request(options, function (error, response) {
      if (error) throw new Error(error);
      console.log(response.body);
    });

    response:

    [
        {
            "id": 19,
            "name": "Dixie",
            "email": "dfurber4@sakura.ne.jp",
            "role": 0
        },
        {
            "id": 25,
            "name": "admin",
            "email": "admin@server.com",
            "role": 1
        }
    ]

GET /user/:id

    Returns information of a specific account.

    request:

var request = require('request');
var options = {
  'method': 'GET',
  'url': 'http://localhost:8080/user/25',
  'headers': {
    'Authorization': 'Bearer AUTHENTICATED-ADMIN-ACCOUNT-TOKEN'
  }
};
request(options, function (error, response) {
  if (error) throw new Error(error);
  console.log(response.body);
});

    response:

{
    "id": 25,
    "email": "admin@server.com",
    "role": 1,
    "name": "admin"
}

    POST /user

    Create a new user account.

    request:

    var request = require('request');
    var options = {
      'method': 'POST',
      'url': 'http://localhost:8080/user',
      'headers': {
        'Authorization': 'Bearer AUTHENTICATED-ADMIN-ACCOUNT-TOKEN',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        "email": "admin@server.com",
        "name": "admin",
        "password": "0808",
        "role": "1"
      })
    
    };
    request(options, function (error, response) {
      if (error) throw new Error(error);
      console.log(response.body);
    });

    response:

    {
        "status": "200",
        "desc": "User has been created",
        "user": "admin",
        "email": "admin@server.com"
    }

    POST /login

    Create an admin authorized token.

    request:

var request = require('request');
var options = {
  'method': 'POST',
  'url': 'http://localhost:8080/login',
  'headers': {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    "email": "admin@server.com",
    "password": "SUPER-SECURE-PASSWORD"
  })
};
request(options, function (error, response) {
  if (error) throw new Error(error);
  console.log(response.body);
});

    response:

{
    "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFkbWluQHNlcnZlci"
}

    POST /recoverPassword

    Generate a password reset token.

    request:

var request = require('request');
var options = {
  'method': 'POST',
  'url': 'http://localhost:8080/recoverPassword',
  'headers': {
    'Authorization': 'Bearer AUTHENTICATED-ADMIN-ACCOUNT-TOKEN',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    "token": "username@server.com"
  })
};
request(options, function (error, response) {
  if (error) throw new Error(error);
  console.log(response.body);
});

    response:

{
    "status": "200",
    "desc": "Recover Token has been created",
    "token": "dc907a89-9097-47c1-b951-3f64244ff59a"
}

    POST /changePassword

    Change an account password.

    request:

    var request = require('request');
    var options = {
      'method': 'POST',
      'url': 'http://localhost:8080/changePassword',
      'headers': {
        'Authorization': 'Bearer AUTHENTICATED-ADMIN-ACCOUNT-TOKEN',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        "token": "9f97107e-d048-4c10-9c1a-60f09d2ca008",
        "password": "NEW-PASSWORD"
      })
    
    };
    request(options, function (error, response) {
      if (error) throw new Error(error);
      console.log(response.body);
    });

    response:

{
    "status": "200",
    "desc": "Password updated"
}

POST /user/:id

    Update account information.

    request:

    var request = require('request');
    var options = {
      'method': 'POST',
      'url': 'http://localhost:8080/user/25',
      'headers': {
        'Authorization': 'Bearer AUTHENTICATED-ADMIN-ACCOUNT-TOKEN',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        "name": "New Name",
        "email": "new@email.com",
        "role": "1"
      })
    };
    request(options, function (error, response) {
      if (error) throw new Error(error);
      console.log(response.body);
    });

    response:

{
    "status": "200",
    "user": "25",
    "desc": "has been updated"
}

DELETE /user/:id

    Delete the account passed to the backend.

    request:

var request = require('request');
var options = {
  'method': 'DELETE',
  'url': 'http://localhost:8080/user/9',
  'headers': {
    'Authorization': 'Bearer AUTHENTICATED-ADMIN-ACCOUNT-TOKEN'
  }
};
request(options, function (error, response) {
  if (error) throw new Error(error);
  console.log(response.body);
});

    response:

{
    "status": "200",
    "user": "17",
    "desc": "has been deleted"
}

    Contributing

    1. Fork the repository!
    2. Clone your fork: git clone https://github.com/JrSchmidtt/api-express-user-auth
    3. Create your feature branch: git checkout -b my-new-feature
    4. Commit your changes: git commit -am 'Add some feature'
    5. Push to the branch: git push origin my-new-feature
    6. Submit a pull request 😀

    Author

    Api User Auth © JrSchmidt.
    Authored and maintained by Schmidt#9639.

    Visit original content creator repository
    https://github.com/JrSchmidtt/api-express-user-auth

  • youtube-dl-wpf

    🎞⬇ Cube YouTube Downloader – youtube-dl-wpf

    Build Release

    WPF GUI for youtube-dl and yt-dlp.

    Home Settings

    Features

    • Follow 🎨 system color mode, or choose between 🌃 dark mode and 🔆 light mode.
    • Update youtube-dl/yt-dlp on startup.
    • List all available formats.
    • Override video, audio formats and output container.
    • Embed metadata into downloaded file.
    • Download and embed thumbnails.
    • Download whole playlists.
    • Select items from playlist to download.
    • Select types of subtitles (default, all languages, auto-generated) to download and embed.
    • Specify custom output template.
    • Specify custom download path.
    • Specify custom FFmpeg path.
    • Specify custom proxy.
    • Specify custom command-line arguments.

    Usage

    1. Download the pre-built binary or build it from source.
    2. Download yt-dlp or youtube-dl.
    3. It’s optional but highly recommended to also download FFmpeg. Otherwise you won’t be able to merge separate video and audio tracks.
    4. The framework-dependent binary requires an installed .NET Runtime to run. Alternatively, download the self-contained binary that bundles the runtime.
    5. Run youtube-dl-wpf.exe. Go to Settings. Set the path to youtube-dl/yt-dlp and FFmpeg.
    6. Go back to the home tab. Paste a video URL and start downloading! 🚀

    FAQ

    1. Q: The Download button is grayed out and I can’t click it!

      A: youtube-dl-wpf is a simple GUI wrapper. It doesn’t bundle any downloader with it. You have to download youtube-dl or yt-dlp for it to work. FFmpeg is required by youtube-dl and yt-dlp when merging separate video and audio tracks, which is the case for most formats on YouTube.

    2. Q: How can I use a proxy to download?

A: Leave the proxy field empty to use system proxy settings. Otherwise the format is similar to how curl accepts proxy strings (e.g. socks5://localhost:1080/, http://localhost:8080/). Currently the upstream doesn't accept the socks5h protocol and treats socks5 as socks5h by always resolving the hostname using the proxy. This is tracked in this issue.

    3. Q: Downloading the whole playlist doesn’t work!

      A: It’s an upstream bug, just like many other issues you might discover. There’s nothing I can do. Just report the bug to yt-dlp or youtube-dl, whichever you use.

    4. Q: youtube-dl and yt-dlp behave differently!

      A: In some cases, yes, and youtube-dl-wpf tries to align their behavior by sending different options and arguments for different backends. See the backends documentation for more information.

    Known Issues

    • 🎉 No known issues!

    To-Do

    • v2.0 – The Parallel Update: download management and download queue for parallel downloads.

    Build

    Prerequisites: .NET 9 SDK

    Note for packagers: The application by default uses executable directory as config directory. To use user’s config directory, define the constant PACKAGED when building.

    Build with Release configuration

    dotnet build -c Release

    Publish as framework-dependent

    dotnet publish YoutubeDl.Wpf -c Release

    Publish as self-contained for Windows x64

    dotnet publish YoutubeDl.Wpf -c Release -r win-x64 --self-contained

    Publish as self-contained for packaging on Windows x64

    dotnet publish YoutubeDl.Wpf -c Release -p:DefineConstants=PACKAGED -r win-x64 --self-contained

    License

    © 2025 database64128

    Visit original content creator repository https://github.com/database64128/youtube-dl-wpf
  • rules_ent

    rules_ent

    Bazel rules for Ent code generation

    [WIP; still very hacky]

    Usage

    Unless done by gazelle, in the BUILD file of the schema package:

    load("@io_bazel_rules_go//go:def.bzl", "go_library")
    
    go_library(
        name = "schema",
        srcs = ["entity.go"],
        importpath = "github.com/cloneable/repo/path/to/schema",
        visibility = ["//:__subpackages__"],
        deps = [
            "@io_entgo_ent//:go_default_library",
            "@io_entgo_ent//schema/field:go_default_library",
        ],
    )

Define a go_ent_library in the BUILD file of the target package. A go_ent_library can be depended upon like a go_library.

    load("@com_github_cloneable_rules_ent//:defs.bzl", "go_ent_library")
    
    go_ent_library(
        name = "ent",
        entities = ["entity"],        # temporarily needed
        gomod = "//:go_mod",          # hopefully only temporarily needed
        importpath = "github.com/cloneable/repo/target/package/ent",
        schema = "//path/to/schema",  # go_library of schema package
        visibility = ["//:__subpackages__"],
    )

Define a filegroup with go.mod and go.sum in the BUILD file at the root
of the Go module, because entc calls the go tool, which expects to find a
proper module. This may change in the future.

    filegroup(
        name = "go_mod",
        srcs = [
            "go.mod",
            "go.sum",
        ],
        visibility = ["//:__subpackages__"],
    )

    In your WORKSPACE file:

    http_archive(
        name = "com_github_cloneable_rules_ent",
        sha256 = "...",
        strip_prefix = "rules_ent-...",
        urls = ["https://github.com/cloneable/rules_ent/..."],
    )

    Visit original content creator repository
    https://github.com/cloneable/rules_ent

  • dotfiles

Dotfiles - A restore point to sync your settings and preferences in your toolbox.

    Why it’s awesome

Dotfiles provides a fast setup to back up, restore, and sync the preferences and settings of your toolbox. Dotfiles might be the most important files on your machine, and I hope they help you as much as they help me!


    Usage

Start reading this document and you will see it is not as difficult as you might have imagined. Just follow it step by step.

NOTE: These tips are just a personal reference; use with care.

    Homebrew

    Homebrew is the package manager for macOS (or Linux).

    ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

    Dependencies

    • asdf (Manage multiple runtime versions)
    • Git Version Control

    brew install asdf git

    Apps

    • AppCleaner
    • BrowserStack
    • Caffeine
    • Docker
    • Figma
    • Franz
    • GoogleChrome
    • Kap
    • LogitechPresentation
    • MeetingBar
    • Pliim
    • Rectangle
    • Slack
    • Sketch
    • Sourcetree
    • VisualStudioCode

    brew install --cask appcleaner browserstacklocal caffeine docker figma franz google-chrome kap meetingbar logitech-presentation pliim rectangle slack sketch sourcetree visual-studio-code

    Plugins

brew install --cask qlcolorcode qlstephen qlmarkdown quicklook-json qlimagesize webpquicklook suspicious-package quicklookase qlvideo && mv ~/Downloads/*.qlgenerator ~/Library/QuickLook && qlmanage -r

    Visual Studio Code

    Visual Studio Code is a source-code editor developed by Microsoft.

    Plugins

    • Auto Close Tag
    • Auto Complete Tag
    • Auto Rename Tag
    • Auto Filename
    • Autotrim
    • Better Comments
    • Browser Preview
    • Code Intellicode
    • Code Runner
    • Code Settings Sync
    • Debugger for Chrome
    • Docker
    • DotENV
    • Dracula Theme
    • EditorConfig
    • ESlint
    • Git Lens
    • Git Ignore
    • GraphQL
    • HTML CSS Class Completion
    • Java
    • JavaScript Snippets
    • Jupyter
    • LiveShare
    • Lorem Ipsum
    • Maven
    • npm script
    • Path Intellisense
    • Prettier
    • Python
    • Remote Containers
    • Ruby
    • Run On Save
    • Sass Indented
    • Styled Components
    • Stylus
    • Sublime Keybindings
    • Terminal
    • TypeScript TSlint Plugin
    • Material Icon Theme
    • Wakatime
    • Whitespacer

    code --install-extension aaron-bond.better-comments && code --install-extension auchenberg.vscode-browser-preview && code --install-extension christian-kohler.path-intellisense && code --install-extension codezombiech.gitignore && code --install-extension dbaeumer.vscode-eslint && code --install-extension deerawan.vscode-whitespacer && code --install-extension dracula-theme.theme-dracula && code --install-extension eamodio.gitlens && code --install-extension EditorConfig.EditorConfig && code --install-extension eg2.vscode-npm-script && code --install-extension emeraldwalk.RunOnSave && code --install-extension esbenp.prettier-vscode && code --install-extension formulahendry.auto-close-tag && code --install-extension formulahendry.auto-complete-tag && code --install-extension formulahendry.auto-rename-tag && code --install-extension formulahendry.code-runner && code --install-extension formulahendry.terminal && code --install-extension GraphQL.vscode-graphql && code --install-extension JerryHong.autofilename && code --install-extension jpoissonnier.vscode-styled-components && code --install-extension mikestead.dotenv && code --install-extension ms-azuretools.vscode-docker && code --install-extension ms-python.python && code --install-extension ms-toolsai.jupyter && code --install-extension ms-vscode-remote.remote-containers && code --install-extension ms-vscode.sublime-keybindings && code --install-extension ms-vscode.vscode-typescript-tslint-plugin && code --install-extension ms-vsliveshare.vsliveshare && code --install-extension msjsdiag.debugger-for-chrome && code --install-extension NathanRidley.autotrim && code --install-extension PKief.material-icon-theme && code --install-extension rebornix.ruby && code --install-extension redhat.java && code --install-extension Shan.code-settings-sync && code --install-extension syler.sass-indented && code --install-extension sysoev.language-stylus && code --install-extension Tyriar.lorem-ipsum && code --install-extension VisualStudioExptTeam.vscodeintellicode && code --install-extension vscjava.vscode-java-debug && code --install-extension vscjava.vscode-java-dependency && code --install-extension vscjava.vscode-java-pack && code --install-extension vscjava.vscode-java-test && code --install-extension vscjava.vscode-maven && code --install-extension WakaTime.vscode-wakatime && code --install-extension wingrunr21.vscode-ruby && code --install-extension xabikos.JavaScriptSnippets && code --install-extension Zignd.html-css-class-completion

After installing, confirm all plugins are installed:

    code --list-extensions

    Settings

    {
      "editor.detectIndentation": true,
      "editor.fontSize": 14,
      "editor.tabSize": 2,
      "files.autoSave": "onFocusChange",
      "files.autoSaveDelay": 0,
      "files.defaultLanguage": "en",
      "files.insertFinalNewline": true,
      "files.trimFinalNewlines": true,
      "files.trimTrailingWhitespace": true,
      "markdown.preview.fontSize": 14,
      "window.openFilesInNewWindow": "on",
      "workbench.colorTheme": "Dracula",
      "workbench.iconTheme": "material-icon-theme"
    }
    

    Google Chrome

    Google Chrome is a cross-platform web browser developed by Google.

    Plugins

    GitHub

    GitHub is provides hosting for software development version control using Git.

    SSH Settings

    1. Generating public/private rsa key pair
      ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

    2. Start the ssh-agent in the background
      eval "$(ssh-agent -s)"

3. Create the config file (the resulting file contents are shown after this list)
      printf "ServerAliveInterval 60\nHost github.com\nHostname ssh.github.com\nPort 443\n" > ~/.ssh/config

    4. Add your SSH private key to the ssh-agent and store your passphrase in the keychain.
      ssh-add -K ~/.ssh/id_rsa

    5. Copy the SSH key to your clipboard.
      pbcopy < ~/.ssh/id_rsa.pub

    6. Now access GitHub SSH Settings to add the SSH key.
      https://github.com/settings/ssh/new
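
For reference, step 3's printf writes the following contents to ~/.ssh/config:

   ServerAliveInterval 60
   Host github.com
   Hostname ssh.github.com
   Port 443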

    GPG Settings

    1. Download and install the GPG command line tools.
      brew install gpg

    2. Generate a GPG key pair.
      gpg --full-generate-key

    3. Enter to accept the default kind of key
      RSA

    4. Enter the desired key size in bits.
      4096

    5. Enter the length of time the key should be valid.
      Press Enter to specify the default selection, indicating that the key doesn’t expire.

    6. Enter your GitHub email address.
      name@email.com

7. Copy the GPG key ID from the list of GPG keys. In this example, the GPG key ID is 3AA5C34371567BD2.

      $ gpg --list-secret-keys --keyid-format LONG
      
      /Users/hubot/.gnupg/secring.gpg
      ------------------------------------
      sec   4096R/3AA5C34371567BD2 2016-03-10 [expires: 2017-03-10]
      uid                          Hubot 
      ssb   4096R/42B317FD4BA89E7A 2016-03-10
      
8. Export the key using your GPG key ID: gpg --armor --export 3AA5C34371567BD2

    9. Copy your GPG key to add in your GitHub account. https://github.com/settings/gpg/new

    Git Settings

    Make it even easier version control ~/.gitconfig

    [user]
      name = CJ Patoilo
      email = cjpatoilo@gmail.com
      signingkey = "Your Sign In Key"
    
    [branch]
      autosetupmerge = always
    
    [alias]
      ci = commit -am
      lo = log --pretty=format:'%an - %h %s %ar'
      st = status
      br = branch
      sw = show
      df = diff
      fe = fetch
      mg = merge
      rb = rebase
      rt = remote -v
      co = checkout
      po = push origin
      pu = pull origin
      pom = push origin master
      pum = pull origin master
      com = checkout master
      pod = push origin develop
      pud = pull origin develop
      cod = checkout develop
      pog = push origin gh-pages
      pug = pull origin gh-pages
      cog = checkout gh-pages
      lg = log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr)%Creset' --abbrev-commit --date=relative
    
    [core]
      excludesfile = ~/.gitignore_global
    
    [commit]
      template = ~/.stCommitMsg
    
    [difftool "sourcetree"]
      cmd = opendiff \"$LOCAL\" \"$REMOTE\"
      trustExitCode = true
      path =
    
    [filter "lfs"]
      clean = git-lfs clean -- %f
      smudge = git-lfs smudge -- %f
      required = true
      process = git-lfs filter-process
    
    [mergetool "sourcetree"]
      cmd = /Applications/Sourcetree.app/Contents/Resources/opendiff-w.sh \"$LOCAL\" \"$REMOTE\" -ancestor \"$BASE\" -merge \"$MERGED\"
      trustExitCode = true
    

    Terminal

    The Terminal is an interface that allows you to access the command line from the GUI.

    Bash Settings

    First create Bash Profile file touch ~/.bash_profile and add this content:

    source $HOME/.git-prompt.sh
    # PS1="\[\033[1;36m\]\u\[\033[32m\]$(__git_ps1 " (\W/%s)")\[\033[0m\] $ "
    PS1="\[\033[1;36m\]\u\[\033[32m\]\$(__git_ps1)\[\033[0m\] $ "
    
    alias cls="clear"
    alias reload="source $HOME/.bash_profile"
    alias www="cd $HOME/Www/"
    
    export PATH="/usr/local/bin:$PATH"
    export PATH="/usr/local/sbin:$PATH"
    
    . $HOME/.asdf/asdf.sh
    

    macOS

    macOS is a series of graphical operating systems developed and marketed by Apple Inc.

    xcode-select --install

    macOS Settings

    • LockScreen: Set Lock Message to show on login screen
      defaults write com.apple.loginwindow LoginwindowText -string "Found me? Shoot a mail to cjpatoilo@gmail.com to return me. Thanks!"

    • Bluetooth: Increase sound quality for Bluetooth headphones/headsets
      defaults write com.apple.BluetoothAudioAgent "Apple Bitpool Min (editable)" -int 40

    • Trackpad: Enable extra multifinger gestures
  defaults write com.apple.dock showMissionControlGestureEnabled -bool true
  defaults write com.apple.dock showAppExposeGestureEnabled -bool true
  defaults write com.apple.dock showDesktopGestureEnabled -bool true
  defaults write com.apple.dock showLaunchpadGestureEnabled -bool true

    • Trackpad: Enable right click with two fingers
  defaults write com.apple.driver.AppleBluetoothMultitouch.trackpad TrackpadRightClick -bool true
  defaults write com.apple.AppleMultitouchTrackpad TrackpadRightClick -bool true
  defaults -currentHost write NSGlobalDomain com.apple.trackpad.enableSecondaryClick -bool true

    • Trackpad: Increment tracking speed
      defaults write NSGlobalDomain com.apple.trackpad.scaling -float 0.875

    • ScrollWheel: Increment tracking speed
      defaults write NSGlobalDomain com.apple.scrollwheel.scaling -float 0.215

• Mouse: Increment tracking speed
  defaults write NSGlobalDomain com.apple.mouse.scaling -int 3

• Mouse: Allow right click button
  defaults write com.apple.driver.AppleBluetoothMultitouch.mouse MouseButtonMode TwoButton

    • Finder: Show all filenames extensions
      defaults write NSGlobalDomain AppleShowAllExtensions -bool true

    • Finder: Show hidden files by default
      defaults write com.apple.finder AppleShowAllFiles -bool true

    • Finder: Show status bar
      defaults write com.apple.finder ShowStatusBar -bool true

    • Finder: Show path bar
      defaults write com.apple.finder ShowPathbar -bool true

    • Finder: Keep folders on top when sorting by name
      defaults write com.apple.finder _FXSortFoldersFirst -bool true

    • Finder: When performing a search, search the current folder by default
      defaults write com.apple.finder FXDefaultSearchScope -string "SCcf"

    • Finder: Disable the warning when changing a file extension
      defaults write com.apple.finder FXEnableExtensionChangeWarning -bool false

    • Finder: Avoid creating .DS_Store files on network or USB volumes
  defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool true
  defaults write com.apple.desktopservices DSDontWriteUSBStores -bool true

    • Finder: Allow text selection in Quick Look
      defaults write com.apple.finder QLEnableTextSelection -bool true


    • TextEdit: Use plain text mode for new TextEdit documents
      defaults write com.apple.TextEdit RichText -int 0

    • TextEdit: Open and save files as UTF-8 in TextEdit
      defaults write com.apple.TextEdit PlainTextEncoding -int 4
      defaults write com.apple.TextEdit PlainTextEncodingForWrite -int 4

    • Screen: Save screenshots to the downloads
      defaults write com.apple.screencapture location -string "$HOME/Downloads"

    • Screen: Save screenshots in PNG format (other options: BMP, GIF, JPG, PDF, TIFF)
      defaults write com.apple.screencapture type -string "png"

    • Screen: Disable shadow in screenshots
      defaults write com.apple.screencapture disable-shadow -bool true

    • Spotlight: Change indexing order and disable some search results

        defaults write com.apple.spotlight orderedItems -array \
          '{"enabled" = 1;"name" = "APPLICATIONS";}' \
          '{"enabled" = 1;"name" = "SYSTEM_PREFS";}' \
          '{"enabled" = 1;"name" = "DIRECTORIES";}' \
          '{"enabled" = 1;"name" = "PDF";}' \
          '{"enabled" = 1;"name" = "FONTS";}' \
          '{"enabled" = 0;"name" = "DOCUMENTS";}' \
          '{"enabled" = 0;"name" = "MESSAGES";}' \
          '{"enabled" = 0;"name" = "CONTACT";}' \
          '{"enabled" = 0;"name" = "EVENT_TODO";}' \
          '{"enabled" = 0;"name" = "IMAGES";}' \
          '{"enabled" = 0;"name" = "BOOKMARKS";}' \
          '{"enabled" = 0;"name" = "MUSIC";}' \
          '{"enabled" = 0;"name" = "MOVIES";}' \
          '{"enabled" = 0;"name" = "PRESENTATIONS";}' \
          '{"enabled" = 0;"name" = "SPREADSHEETS";}' \
          '{"enabled" = 0;"name" = "SOURCE";}' \
          '{"enabled" = 0;"name" = "MENU_DEFINITION";}' \
          '{"enabled" = 0;"name" = "MENU_OTHER";}' \
          '{"enabled" = 0;"name" = "MENU_CONVERSION";}' \
          '{"enabled" = 0;"name" = "MENU_EXPRESSION";}' \
          '{"enabled" = 0;"name" = "MENU_WEBSEARCH";}' \
          '{"enabled" = 0;"name" = "MENU_SPOTLIGHT_SUGGESTIONS";}'
      
    • Spotlight: Load new settings before rebuilding the index
      killall mds > /dev/null 2>&1

    • Spotlight: Make sure indexing is enabled for the main volume
      sudo mdutil -i on / > /dev/null

    • Spotlight: Rebuild the index from scratch
      sudo mdutil -E / > /dev/null

    • Terminal: Only use UTF-8 in Terminal.app
      defaults write com.apple.terminal StringEncodings -array 4

    • Terminal: Enable Secure Keyboard Entry in Terminal.app
      defaults write com.apple.terminal SecureKeyboardEntry -bool true

    • Terminal: Disable the annoying line marks
      defaults write com.apple.Terminal ShowLineMarks -int 0

    • Time Machine: Prevent Time Machine from prompting to use new hard drives as backup volume
      defaults write com.apple.TimeMachine DoNotOfferNewDisksForBackup -bool true

    • Time Machine: Disable local Time Machine backups
      hash tmutil &> /dev/null && sudo tmutil disablelocal

    • Activity Monitor: Show the main window when launching Activity Monitor
      defaults write com.apple.ActivityMonitor OpenMainWindow -bool true

    • Activity Monitor: Visualize CPU usage in the Activity Monitor Dock icon
      defaults write com.apple.ActivityMonitor IconType -int 5

    • Activity Monitor: Show all processes in Activity Monitor
      defaults write com.apple.ActivityMonitor ShowCategory -int 0

    • Activity Monitor: Sort Activity Monitor results by CPU usage
  defaults write com.apple.ActivityMonitor SortColumn -string "CPUUsage"
  defaults write com.apple.ActivityMonitor SortDirection -int 0

    Contributing

    Want to contribute? Follow these recommendations.

    License

    Designed with ♥ by CJ Patoilo. Licensed under the MIT License.

    Visit original content creator repository https://github.com/cjpatoilo/dotfiles
  • pygmc

    PyGMC

    PyPI - Version GitHub Actions Workflow Status Read the Docs codecov PyPI Monthly Downloads

    PyGMC is a Python API for Geiger–Müller Counters (GMCs) / Geiger Counters. It has just one dependency (pyserial) and works on multiple operating systems: Windows, OSX, Linux. PyGMC aims to be a minimalistic interface – lowering the installation requirements and allowing the user to build their own tools on top of a stable package.

    Installation

    pip install pygmc
    conda install conda-forge::pygmc

    Note: the conda package may lag behind the latest PyPI version.

    Example Usage

    Jupyter Notebook

    Auto discover connected GMC, auto identify baudrate, and auto select correct device.

    import pygmc
    
    gc = pygmc.connect()
    
    ver = gc.get_version()
    print(ver)
    
    cpm = gc.get_cpm()
    print(cpm)

    Connect to specified GMC device with exact USB port/device/com.

    import pygmc
    
    gc = pygmc.GMC320('/dev/ttyUSB0')
    
    cpm = gc.get_cpm()
    print(cpm)

    Read device history into DataFrame

    import pandas as pd
    import pygmc
    
    gc = pygmc.GMC320('/dev/ttyUSB0')
    
    history = gc.get_history_data()
    df = pd.DataFrame(history[1:], columns=history[0])
    Sample output (df):

    datetime              count  unit  mode          reference_datetime    notes
    2023-04-19 20:37:18   11     CPM   every minute  2023-04-19 20:36:18
    2023-04-19 20:38:18   20     CPM   every minute  2023-04-19 20:36:18
    2023-04-19 20:39:18   19     CPM   every minute  2023-04-19 20:36:18
    2023-04-19 20:40:18   23     CPM   every minute  2023-04-19 20:36:18
    2023-04-19 20:41:18   20     CPM   every minute  2023-04-19 20:36:18

    Devices

    Device Confirmed Brand Notes
    GMC-300S ✔️✔️ GQ Electronics A little picky
    GMC-300E+ / GMC-300E Plus GQ Electronics
    GMC-320+ / GMC-320 Plus ✔️✔️ GQ Electronics Works smoothly
    GMC-320S GQ Electronics
    GMC-500 GQ Electronics
    GMC-500+ / GMC-500 Plus ✔️✔️ GQ Electronics Works smoothly
    GMC-600 GQ Electronics
    GMC-600+ / GMC-600 Plus ✔️✔️ GQ Electronics
    GMC-800 ✔️✔️ GQ Electronics *Finally Working
    GMC-SE ✔️ GQ Electronics RFC1201

    ✔️✔️=physically confirmed works
    ✔️=user confirmed works
    *Incorrect documentation caused incorrect implementation with pygmc<=0.10.0

    Contributors

    Notes

    • Alternative Python projects for GQ GMC:
    • Device website GQ Electronics Seattle, WA
      • Not affiliated in any way.

    Known Issues

    • Ubuntu Issue
      • Ubuntu ships a rule that must be worked around before it can connect to any GQ GMC device.
        USB devices identify themselves with a VID (vendor ID) and PID (product ID)… It is common for unrelated devices to share a VID:PID because they use the same manufacturer's USB-to-serial chip. The issue with Ubuntu is that it assumes 1A86:7523 is a “Braille” device (for the blind) and, ironically, blindly treats it as such.
      • This prevents the GQ GMC device from connecting.
    • Ubuntu fix
      • The fix is to comment out the udev rule that does this. The rules file may be in one of two places:
        • /usr/lib/udev/85-brltty.rules
        • /usr/lib/udev/rules.d/85-brltty.rules
      • Find the line below and comment it out (a scripted version of the fix appears after this list).
        • ENV{PRODUCT}=="1a86/7523/*", ENV{BRLTTY_BRAILLE_DRIVER}="bm", GOTO="brltty_usb_run"
      • We see Ubuntu assumes 1A86:7523 is a Baum [NLS eReader Zoomax (20 cells)] device.
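      A minimal sketch of the fix as shell commands, assuming the stock rule text shown above (the rules file location varies by release, so both candidate paths are checked):

        # Comment out the brltty rule that claims 1A86:7523 (the CH340 USB-serial chip)
        for f in /usr/lib/udev/85-brltty.rules /usr/lib/udev/rules.d/85-brltty.rules; do
          [ -f "$f" ] && sudo sed -i 's|^ENV{PRODUCT}=="1a86/7523/\*"|#&|' "$f"
        done
        # Reload udev rules so the change applies without a reboot
        sudo udevadm control --reload-rules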
    Visit original content creator repository https://github.com/Wikilicious/pygmc
  • terraform-provider-rediscloud

    Terraform Provider Redis Cloud

    The Redis Enterprise Cloud Terraform provider is a plugin for Terraform that allows Redis Enterprise Cloud customers to manage the full
    lifecycle of their subscriptions and related Redis databases.

    Requirements

    Quick Starts

    To use the Redis Enterprise Cloud Terraform provider you will need to set the following environment variables.
    These keys are created through the Redis Enterprise Cloud console under the Settings menu.

    • REDISCLOUD_ACCESS_KEY – Account Cloud API Access Key
    • REDISCLOUD_SECRET_KEY – Individual user Cloud API Secret Key
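
    For example, in a shell session (the key values below are placeholders):

    $ export REDISCLOUD_ACCESS_KEY="<your-access-key>"
    $ export REDISCLOUD_SECRET_KEY="<your-secret-key>"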

    Developing the Provider

    If you wish to work on the provider, you’ll first need Go installed on your machine (see Requirements above).
    You will also need to create or have access to a Redis Enterprise Cloud account.

    Building the Provider

    1. Clone the repository
    2. Enter the repository directory
    3. Build the provider using the make build command:
    $ make build

    The make build command will build a local provider binary into a bin directory at the root of the repository.

    Installing the Provider

    After the provider has been built locally it must be placed in the user plugins directory so it can be discovered by the
    Terraform CLI. The default user plugins directory root is ~/.terraform.d/plugins.

    Use the following make command to install the provider locally.

    $ make install_local

    The provider will now be installed in the following location, ready to be used by Terraform:

    ~/.terraform.d/plugins
    └── registry.terraform.io
        └── RedisLabs
            └── rediscloud
                └── 99.99.99
                    └── <OS>_<ARCH>
                        └── terraform-provider-rediscloud_v99.99.99
    

    The provider binary is built using a version number of 99.99.99 and this will allow Terraform to use the locally
    built provider over a released version.

    The provider is now installed and can be discovered by Terraform through the following HCL block.

    terraform {
      required_providers {
        rediscloud = {
          source = "RedisLabs/rediscloud"
        }
      }
      required_version = "~> 1.2"
    }
    

    The following is an example of using the rediscloud_regions data-source to discover a list of supported regions. It can be
    used to verify that the provider is set up and installed correctly without incurring the cost of subscriptions and databases.

    data "rediscloud_regions" "example" {
    }
    
    output "all_regions" {
      value = data.rediscloud_regions.example.regions
    }
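
    As a hedged usage sketch: with the two HCL blocks above saved in an empty working directory, the standard Terraform workflow should print the region list without creating any billable resources (a data source only reads):

    $ terraform init
    $ terraform apply -auto-approve
    $ terraform output all_regions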
    

    Testing the Provider

    In order to run the full suite of Acceptance tests, run make testacc.

    Note: Acceptance tests create real resources, and often cost money to run.

    $ make testacc

    In order to run an individual acceptance test, the ‘-run’ flag can be used together with a regular expression.
    The following example uses a regular expression matching a single test called ‘TestAccResourceRedisCloudSubscription_createWithDatabase’.

    $ make testacc TESTARGS='-run=TestAccResourceRedisCloudSubscription_createWithDatabase'

    In order to run the tests with extra debugging context, prefix the make command with TF_LOG (see the terraform documentation for details).

    $ TF_LOG=trace make testacc

    By default, the tests run with a parallelism of 3. This can be reduced if some tests are failing due to network-related
    issues, or increased if possible, to reduce the running time of the tests. Prefix the make command with TEST_PARALLELISM,
    as in the following example, to configure this.

    $ TEST_PARALLELISM=2 make testacc

    A core set of acceptance tests, considered the short tests, is executed through the build pipeline.
    Functionality that requires additional setup or environment variables can be executed using the following flags.

    Flag Description
    -tls Allows execution of TLS-based acceptance tests
    -contract Allows execution of contract payment method tests
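
    These flags are ordinary go test arguments; assuming the Makefile passes TESTARGS through unchanged (as in the -run example above), they can presumably be supplied the same way:

    $ make testacc TESTARGS='-tls'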

    Adding Dependencies

    This provider uses Go modules.
    Please see the Go documentation for the most up-to-date information about using Go modules.

    To add a new dependency github.com/author/dependency to your Terraform provider:

    go get github.com/author/dependency
    go mod tidy
    

    Then commit the changes to go.mod and go.sum.

    Releasing the Provider

    The steps to release a provider are:

    1. Decide what the next version number will be. As this provider tries to follow semantic versioning, the best strategy would be to look at the previous release number and decide whether the MAJOR, MINOR or PATCH version should be incremented.
    2. Create a new tag on your local copy of this Git repository in the format of vMAJOR.MINOR.PATCH, where MAJOR.MINOR.PATCH is the version number you settled on in the previous step.
    3. Push the tag from your local copy to GitHub (a sketch of the commands appears after this list). This will trigger the release GitHub Action workflow that will create the release for you.
    4. While you are waiting for GitHub to finish building the release, update the CHANGELOG with what has been added, fixed and changed in this release.
    5. Once the release workflow has finished, the Terraform Registry will eventually spot the new version and update the registry page – this may take a few minutes.
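
    A concrete sketch of steps 2 and 3, using a hypothetical next version number of v1.2.3:

    $ git tag v1.2.3
    $ git push origin v1.2.3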

    Visit original content creator repository
    https://github.com/RedisLabs/terraform-provider-rediscloud