Blog

  • xgbmr

An R implementation of a micro-reserving model using the XGBoost algorithm.

This package aims to contain all the material needed to reproduce the research project of Gabriel Crépeault-Cauchon, carried out for the ACT-2101 course during the Fall 2019 term at Université Laval.

Summary of the research project

The goal of the research project was to reproduce the model of F. Duval and M. Pigeon (2019), which proposes using the XGBoost algorithm to predict the ultimate paid amount of an insurance claim, in order to estimate the individual reserve to set aside. As part of the project, the model was implemented in R on a claims portfolio simulated with a neural network (Gabrielli and Wüthrich, 2018).

Installing the package

For now, the package is not available on CRAN. It can nevertheless be installed into your R session with the following command:

    devtools::install_github(repo = "gabrielcrepeault/xgbmr")

Notes on the documentation

In case this package ever gets published on CRAN, the documentation of the package's various components was written in English. This website only serves to support the research project report.

Contents of the xgbmr package

• Custom functions, mainly used in the implementation and results chapters
• Helper functions that make it easier to tune an XGBoost model. These functions can be reused to fit an XGBoost model in an entirely different context.
• The Black Box Explain Shiny app, which uses the main functions of the iml package to visualize (interactively) the interpretability of the model (feature importance, PDP/ICE curves, LIME models, pred-on-truth graphs, etc.).

    Références

    • Duval, F., & Pigeon, M. (2019). Individual loss reserving using a gradient boosting-based approach. Risks, 7(3), 79.
• Gabrielli, A., & Wüthrich, M. V. (2018). An individual claims history simulation machine. Risks, 6(2), 29.

    Visit original content creator repository
    https://github.com/gabrielcrepeault/xgbmr

  • Footprint

    Footprint


    This plugin allows you to pass the currently logged in user info to the model layer of a CakePHP application.

It comes bundled with the FootprintBehavior to give you control over columns such as user_id, created_by, and company_id, similar to the core’s TimestampBehavior.

    Install

    Using Composer:

    composer require muffin/footprint

You then need to load the plugin by running the console command:

    bin/cake plugin load Muffin/Footprint

The Footprint plugin must be loaded before the Authentication plugin, so you should update your config/plugins.php or Application::bootstrap() accordingly.

    Usage

    Middleware

    Add the FootprintMiddleware to the middleware queue in your Application::middleware() method:

    $middleware->add('Muffin/Footprint.Footprint');

It must be added after AuthenticationMiddleware to ensure that it can read the identity info after authentication is done.

    If you don’t have direct access to the place where AuthenticationMiddleware is added then check here.

    Behavior

To use the included behavior to automatically update, for example, the created_by and modified_by fields of a record, add the following to your table’s initialize() method:

    $this->addBehavior('Muffin/Footprint.Footprint');

    You can customize that like so:

    $this->addBehavior('Muffin/Footprint.Footprint', [
        'events' => [
            'Model.beforeSave' => [
                'user_id' => 'new',
                'company_id' => 'new',
                'modified_by' => 'always'
            ]
        ],
        'propertiesMap' => [
            'company_id' => '_footprint.company.id',
        ],
    ]);

This will insert the currently logged-in user’s primary key into the user_id and modified_by fields when creating a record, set modified_by again when updating the record, and use the associated user record’s company id for the company_id field when creating a record.

    You can also provide a closure that accepts an EntityInterface and returns a bool:

    $this->addBehavior('Muffin/Footprint.Footprint', [
        'events' => [
            'Model.beforeSave' => [
                'user_id' => 'new',
                'company_id' => 'new',
                'modified_by' => 'always',
                'deleted_by' => function ($entity): bool {
                    return $entity->deleted !== null;
                },
            ]
        ],
    ]);

    Adding middleware via event

    In some cases you don’t have direct access to the place where the AuthenticationMiddleware is added. Then you will have to add this to your src/Application.php

    use Authentication\Middleware\AuthenticationMiddleware;
    use Cake\Event\EventInterface;
    use Cake\Http\MiddlewareQueue;
    use Muffin\Footprint\Middleware\FootprintMiddleware;
    
    // inside the bootstrap() method
    $this->getEventManager()->on(
        'Server.buildMiddleware',
        function (EventInterface $event, MiddlewareQueue $middleware) {
            $middleware->insertAfter(AuthenticationMiddleware::class, FootprintMiddleware::class);
        }
    );

    Patches & Features

    • Fork
    • Mod, fix
    • Test – this is important, so it’s not unintentionally broken
    • Commit – do not mess with license, todo, version, etc. (if you do change any, bump them into commits of their own that I can ignore when I pull)
    • Pull request – bonus point for topic branches

    Bugs & Feedback

    http://github.com/usemuffin/footprint/issues

    License

    Copyright (c) 2015-Present, Use Muffin and licensed under The MIT License.

    Visit original content creator repository https://github.com/UseMuffin/Footprint
  • countries-finder

    Countries Finder (Theme switcher & lazy load).

    This is a solution to the REST Countries API with color theme switcher challenge on Frontend Mentor.

    Table of contents

    Overview

    The challenge

    Users should be able to:

    • See all countries from the API on the homepage
    • Search for a country using an input field
    • Filter countries by region
    • Click on a country to see more detailed information on a separate page
    • Click through to the border countries on the detail page
    • Toggle the color scheme between light and dark mode

    Links

    Videos

    Browser.mp4
    Mobile.mp4

    My process

    Built with

React, React Router, Redux, Styled Components

• Lazy load with Intersection Observer
• Mobile-first workflow
• Semantic HTML5 markup
• Flexbox for components

    What I practiced

    • Made use of localStorage with redux in order to preload and store a theme
    const getTheme = getLSThemeMode()
    const INITIAL_STATE = {
      app: {
        themeMode: getTheme || 'darkMode',
        ...
      },
    }
    • Made use of custom hooks in order to handle the data
      const [loading, error, countries, getCountries] = useCountriesState()
• Redux: replaced the switch statement with an object literal to handle the reducers
    export const app = (state = INITIAL_STATE, action) => {
      return (
        {
          '@app/themeMode': { ...state, themeMode: action.payload },
          ...
        }[action.type] || state
      )
    }

    Author

    Visit original content creator repository https://github.com/Ivanricee/countries-finder
  • MarCustomDjangoTemplate

    Table of contents:


Author note: You may use this custom Django template I made; it is simplified, and I found it efficient when handling your own APIs.

    Status

1. Has admin (currently hidden by default)
2. Integrated HTMX (removed)
3. Uses a component library as the frontend’s main CSS framework
4. Has settings

    Usage

1. You need a database inside application/_core/database/db.sqlite3

2. You need to have a .env file just outside the application folder.

  It should contain:

      APP_NAME=app_name
      SECRET_KEY=secret_key
      ALLOWED_HOSTS=weblink.com localhost 127.0.0.1
      DEBUG=True or False
      DATABASE_URL=""
    3. database

Just create a database folder inside _core:

      application/_core/database

      then run: python manage.py migrate

You may now add your models or a superuser
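As a sketch of how the .env values listed above could be consumed in settings.py (illustrative only; it assumes a loader such as django-environ or python-dotenv has already exported the file into the process environment, so here the values are seeded directly to keep the snippet self-contained):

```python
import os

# Seed the variables from the example .env so the sketch runs standalone.
# In a real project these would come from the .env file itself.
os.environ["APP_NAME"] = "app_name"
os.environ["ALLOWED_HOSTS"] = "weblink.com localhost 127.0.0.1"
os.environ["DEBUG"] = "True"

APP_NAME = os.environ["APP_NAME"]
# ALLOWED_HOSTS is space-separated in the example above
ALLOWED_HOSTS = os.environ["ALLOWED_HOSTS"].split()
# DEBUG arrives as text, so compare against the string "True"
DEBUG = os.environ.get("DEBUG", "False") == "True"
```

Django expects ALLOWED_HOSTS as a list and DEBUG as a bool, which is why both strings are converted rather than used as-is.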

    Tech Used

    1. Nodejs

      • daisyui
      • tailwindcss
      • webpack
      • theme-change
    2. Page Animation

    3. DaisyUI

Note: a component library for Tailwind CSS that helps me write less code in the HTML.
Here is the official website.

    References

    1. Skeleton Tutorial

    Settings

    1. (will be added) Font Size – Allow users to increase or decrease the font size according to their comfort.
    2. (will be added) Contrast Settings – Provide options to adjust color contrast, especially between text and background.
3. (done) Color Theme – Allow users to choose from different color themes or create a high-contrast mode.
    4. (will be added) Line Spacing – Enable users to adjust line spacing for better readability.
    5. (will be added) Font Type – Offer different font options, considering readability for users with visual impairments.
    6. (will not be included) Text-to-Speech (TTS) – Include a text-to-speech feature that allows users to listen to the content.
    7. (will not be included) Keyboard Shortcuts – Provide users the ability to customize keyboard shortcuts for navigation.
    8. (will not be included) Animations and Transitions – Allow users to control or disable animations and transitions to reduce motion sensitivity.
    9. (will not be included) Cursor Size and Color – Let users customize the size and color of the cursor for better visibility.
    10. (will not be included) Background Images – Allow users to toggle background images on or off to reduce visual clutter.
    11. (will not be included) Link Styles – Provide options to customize link styles, such as underlining or bolding.
    12. (will not be included) Audio Descriptions – Include an option for users to enable or disable audio descriptions for multimedia content.
    13. (will not be included) Closed Captions – Allow users to customize closed caption settings, including font size and color.
    14. (will not be included) Focus Indicator – Let users customize the focus indicator style and color for keyboard navigation.
    15. (will not be included) Time Delays – For interactive elements, allow users to adjust time delays for tooltips or pop-ups.
    16. (will not be included) Language Preferences – Allow users to select their preferred language for content.
    17. (will not be included) Reading Mode – Include a reading mode that simplifies the layout and focuses on the main content.
    18. (will not be included) Skip Navigation – Provide an option to show or hide the “skip to content” link.
    19. (done) Reset to Defaults – Include a button that allows users to reset all settings to default.
    20. (will be added) Help and Guidance – Include tooltips or guidance for each setting to explain its impact.

    Pages

    1. Landing Page (index)
    2. Home
    3. About
    4. Evaluate

    Visit original content creator repository
    https://github.com/joemar25/MarCustomDjangoTemplate

  • django-modernize

Open-source Django project crafted on top of Modernize, an open-source Bootstrap 5 design from AdminMart. The product is designed to deliver the best possible user experience with highly customizable, feature-rich pages. Modernize has an easy and intuitive responsive design, whether it is viewed on retina screens or laptops.


    Features:

    • Up-to-date Dependencies
    • ✅ Theme: Django Admin Modernize, designed by AdminMart
      • can be used in any Django project (new or legacy)
• Authentication: django.contrib.auth, registration
    • 🚀 Deployment
      • CI/CD flow via Render

    Modernize - Bootstrap 5 design


    Manual Build

    👉 Download the code

    $ git clone https://github.com/app-generator/django-modernize.git
    $ cd django-modernize

    👉 Install modules via VENV

    $ virtualenv env
    $ source env/bin/activate
    $ pip install -r requirements.txt

    👉 Set Up Database

    $ python manage.py makemigrations
    $ python manage.py migrate

    👉 Create the Superuser

    $ python manage.py createsuperuser

    👉 Start the app

    $ python manage.py runserver

    At this point, the app runs at http://127.0.0.1:8000/.


    Codebase structure

    The project is coded using a simple and intuitive structure presented below:

    < PROJECT ROOT >
       |
       |-- config/                            
       |    |-- settings.py                  # Project Configuration  
       |    |-- urls.py                      # Project Routing
       |
       |-- home/
       |    |-- views.py                     # APP Views 
       |    |-- urls.py                      # APP Routing
       |    |-- models.py                    # APP Models 
       |    |-- tests.py                     # Tests  
       |    |-- templates/                   # Theme Customisation 
       |         |-- includes                # 
       |              |-- custom-footer.py   # Custom Footer      
       |     
       |-- requirements.txt                  # Project Dependencies
       |
       |-- env.sample                        # ENV Configuration (default values)
       |-- manage.py                         # Start the app - Django default start script
       |
       |-- ************************************************************************

    Deploy on Render

    • Create a Blueprint instance
    • Click New Blueprint Instance button.
    • Connect your repo which you want to deploy.
• Fill in the Service Group Name and click the Update Existing Resources button.
• After that, your deployment will start automatically.

    At this point, the product should be LIVE.



    Django Modernize – Minimal Django core provided by App Generator.

    Visit original content creator repository https://github.com/app-generator/django-modernize
  • fastboot.js

    Visit original content creator repository
    https://github.com/Katya-Incorporated/fastboot.js

  • django-fav

    django-fav

    A simple reusable app for django that makes it easy to deal with faving
    and unfaving any object from any application.

    It comes with a Graphene (GraphQL) Query to enable favs in your queries.

    Requirements

    • Python 3.4+
    • Django 1.11

    Installation

    pip install django-fav
    
    • Add the app to your settings.py

    INSTALLED_APPS = [
      ...
      "fav",
      ...
    ]
    • Sync your database:
    python manage.py migrate
    

    Usage:

    Favorites Manager

    • Create a Favorite instance for a user and object:

    >>> from django.contrib.auth.models import User
    >>> from music.models import Song
    >>> user = User.objects.get(username='gengue')
    >>> song = Song.objects.get(pk=1)
    >>> fav = Favorite.objects.create(user, song)
    or:
    
    >>> fav = Favorite.objects.create(user, 1, Song)
    or:
    
    >>> fav = Favorite.objects.create(user, 1, "music.Song")
    • Get the objects favorited by a given user:

    >>> from django.contrib.auth.models import User
    >>> user = User.objects.get(username='gengue')
    >>> Favorite.objects.for_user(user)
    >>> [<Favorite: Favorite object 1>, <Favorite: Favorite object 2>, <Favorite: Favorite object 3>]
    • Now, get user favorited objects belonging to a given model:

    >>> from django.contrib.auth.models import User
    >>> from music.models import Song
    >>> user = User.objects.get(username='gengue')
    >>> Favorite.objects.for_user(user, model=Song)
    >>> [<Favorite: Favorite object 1>, <Favorite: Favorite object 2>, <Favorite: Favorite object 3>]
    • Get the favorited object instances of a given model favorited by any user:

    >>> from music.models import Song
    >>> Favorite.objects.for_model(Song)
    >>> [<Favorite: Favorite object 1>, <Favorite: Favorite object 2>, <Favorite: Favorite object 3>]
    • Get a Favorite instance for a given object and user:

    >>> from django.contrib.auth.models import User
    >>> from music.models import Song
    >>> user = User.objects.get(username='gengue')
    >>> song = Song.objects.get(pk=1)
    >>> fav = Favorite.objects.get_favorite(user, song)
    • Get all Favorite instances for a given object

    >>> from music.models import Song
    >>> song = Song.objects.get(pk=1)
    >>> fav = Favorite.objects.for_object(song)

    Graphql

In settings.py, map your graphene queries to your Django models:

    FAV_MODELS = {
        'CurrentUser': 'core.user',
        'User': 'core.user',
        'Track': 'listen.Track',
    }

Add fav.graphql_schema.Query and fav.graphql_schema.Mutation to your root query and mutation.

    import graphene
    import fav.graphql_schema
    
    class Query(
            ...
            fav.graphql_schema.Query,
            graphene.ObjectType):
        pass
    
    class Mutation(
            ...
            fav.graphql_schema.Mutation,
            graphene.ObjectType):
        pass
    
    
    schema = graphene.Schema(query=Query, mutation=Mutation)

    Query

    Then, you can ask for:

    query {
      isInUserFavorites(objectId: "VHJhY2s6OA==")
    }

    and you get

    {
      "data": {
        "isInUserFavorites": false
      }
    }

    Mutation

    mutation {
      favorite(input: {objectId: "VHJhY2s6OA=="}) {
        deleted
        created
      }
    }

    and you get

    {
      "data": {
        "favorite": {
          "deleted": null,
      "created": true
        }
      }
    }

    Thanks

    Visit original content creator repository
    https://github.com/vied12/django-fav

  • auto-assign

    Auto assign

    License

    Overview

    GitHub action that automatically assigns issues and pull requests to specified assignees.

    How to use

    Before configuring your .yml file, let’s understand the configuration parameters.

Parameter | Type | Required | Default | Description
assignees | string | Yes | N/A | Comma-separated list of usernames; assignments will be made to them.
github_token | string | Yes | N/A | GitHub app installation access token.
allow_self_assign | boolean | No | True | Allows self-assignment to an issue or pull request.
allow_no_assignees | boolean | No | False | Prevents the action from failing when there are no assignees.
assignment_options | string | No | ISSUE | Whether to assign issues, pull requests, or both (ISSUE, PULL_REQUEST, or ISSUE,PULL_REQUEST).

    By default, write permission allows the GitHub action only to create and edit issues in public repositories. You must use admin permission or a more restricted setting for private repositories. You can generate a personal access token with the required scopes.

    Working only with issues

    Example of how to configure your .yml file to auto-assign users only for issues.

    name: Auto assign issues
    
    on:
      issues:
        types:
          - opened
    
    jobs:
      run:
        runs-on: ubuntu-latest
        permissions:
          issues: write
        steps:
          - name: Assign issues
            uses: gustavofreze/auto-assign@1.0.0
            with:
              assignees: 'user1,user2'
              github_token: '${{ secrets.GITHUB_TOKEN }}'
              assignment_options: 'ISSUE'

    Working only with pull request

    Example of how to configure your .yml file to auto-assign users only for pull requests.

    name: Auto assign pull requests
    
    on:
      pull_request:
        types:
          - opened
    
    jobs:
      run:
        runs-on: ubuntu-latest
        permissions:
          pull-requests: write
        steps:
          - name: Assign pull requests
            uses: gustavofreze/auto-assign@1.0.0
            with:
              assignees: 'user1,user2'
              github_token: '${{ secrets.GITHUB_TOKEN }}'
              assignment_options: 'PULL_REQUEST'

    Working with issues and pull requests

    Example of how to configure your .yml file to auto-assign users for issues and pull requests.

    name: Auto assign issues and pull requests
    
    on:
      issues:
        types:
          - opened
      pull_request:
        types:
          - opened
    
    jobs:
      run:
        runs-on: ubuntu-latest
        permissions:
          issues: write
          pull-requests: write
        steps:
          - name: Assign issues and pull requests
            uses: gustavofreze/auto-assign@1.0.0
            with:
              assignees: 'user1,user2'
              github_token: '${{ secrets.GITHUB_TOKEN }}'
              assignment_options: 'ISSUE,PULL_REQUEST'

    Working with issues and pull requests in a non-restrictive way

    Example of configuring your .yml file to automatically assign users for issues and pull requests.

    The difference in approach consists of the following:

    • If the only assignable user were the one who started the workflow, it would be assigned.
    • No items will be assigned if users are not assignable, including those who started the workflow. However, no error will occur.
    name: Auto assign issues and pull requests
    
    on:
      issues:
        types:
          - opened
      pull_request:
        types:
          - opened
    
    jobs:
      run:
        runs-on: ubuntu-latest
        permissions:
          issues: write
          pull-requests: write
        steps:
          - name: Assign issues and pull requests
            uses: gustavofreze/auto-assign@1.0.0
            with:
              assignees: 'user1,user2'
              github_token: '${{ secrets.GITHUB_TOKEN }}'
              allow_self_assign: 'true'
              allow_no_assignees: 'true'
              assignment_options: 'ISSUE,PULL_REQUEST'

    License

    Auto-assign is licensed under MIT.

    Contributing

    Please follow the contributing guidelines to contribute to the project.

    Visit original content creator repository https://github.com/gustavofreze/auto-assign
  • OffLoader

    OffLoader

It is a simple loader for OFF (Object File Format) files, written in C++ using OpenGL. The code reads files that are completely basic: they contain only triangles (i.e., 3 vertices per triangle) and no color per vertex/triangle. Supporting quads or larger polygons would require subdividing the primitives.

The data structure used is very simple. It stores the number of vertices and triangles plus two dynamic arrays (you can replace them with vectors). The base data types, such as point3f, were built using OpenGL Mathematics (http://glm.g-truc.net/).

The code includes GLM, FreeGLUT (including the dll/lib) and a simple .OFF file for testing. There are several pages where .OFF files can be downloaded. It works with the Visual Studio compiler because it uses #pragma comment to link the freeglut .lib.

A reminder: this code is only meant to be used as a template. There is no focus on efficiency inside the code; in fact, it uses the slowest and deprecated way to render in OpenGL: glBegin/glEnd.
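To make the file layout concrete, here is a minimal sketch of the parsing logic for the basic OFF variant described above (shown in Python for brevity; the repository itself is C++): a header line, a counts line, the vertices, then triangle-only faces.

```python
def parse_off(text):
    """Parse a minimal OFF file: triangles only, no per-vertex/face colors."""
    lines = [ln.strip() for ln in text.splitlines()
             if ln.strip() and not ln.lstrip().startswith("#")]
    if lines[0] != "OFF":
        raise ValueError("missing OFF header")
    n_verts, n_faces, _n_edges = map(int, lines[1].split())
    vertices = [tuple(map(float, lines[2 + i].split())) for i in range(n_verts)]
    triangles = []
    for i in range(n_faces):
        parts = list(map(int, lines[2 + n_verts + i].split()))
        if parts[0] != 3:
            raise ValueError("only triangles supported; subdivide quads first")
        triangles.append(tuple(parts[1:4]))
    return vertices, triangles

# A tiny single-triangle mesh in OFF format
sample = """OFF
3 1 0
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
3 0 1 2
"""
verts, tris = parse_off(sample)
```

Each face line starts with its vertex count, which is why the parser rejects anything other than 3, matching the triangles-only restriction of the loader.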

    Visit original content creator repository
    https://github.com/esmitt/OffLoader

  • dataloader

    Dataloader


    Dataloader is a generic utility to be used as part of your application’s data fetching layer to provide a simplified and consistent API to perform batching and caching within a request. It is heavily inspired by Facebook’s dataloader.

    Getting started

    First, install Dataloader using bundler:

    gem "dataloader"

To get started, instantiate Dataloader. Each Dataloader instance represents a unique cache. Typically, instances are created per request when used within a web server. To see how to use it with a GraphQL server, see the section below.

Dataloader depends on promise.rb (the Promise class), which you can use freely for batch-ready code (e.g. a loader can return a Promise that resolves to another Promise, and so on). Dataloader will try to batch most of them.

    Basic usage

    # It will be called only once with ids = [0, 1, 2]
    loader = Dataloader.new do |ids|
      User.find(*ids)
    end
    
    # Schedule data to load
    promise_one = loader.load(0)
    promise_two = loader.load_many([1, 2])
    
    # Get promises results
    user0 = promise_one.sync
    user1, user2 = promise_two.sync

    Using with GraphQL

    You can pass loaders passed inside context.

    UserType = GraphQL::ObjectType.define do
      field :name, types.String
    end
    
    QueryType = GraphQL::ObjectType.define do
      name "Query"
      description "The query root of this schema"
    
      field :user do
        type UserType
        argument :id, !types.ID
        resolve ->(obj, args, ctx) {
          ctx[:user_loader].load(args["id"])
        }
      end
    end
    
    Schema = GraphQL::Schema.define do
      lazy_resolve(Promise, :sync)
    
      query QueryType
    end
    
    context = {
      user_loader: Dataloader.new do |ids|
        User.find(*ids)
      end
    }
    
    Schema.execute("{ user(id: 12) { name } }", context: context)

    Batching

    You can create loaders by providing a batch loading function.

    user_loader = Dataloader.new { |ids| User.find(*ids) }

    A batch loading block accepts an Array of keys, and returns a Promise which resolves to an Array or Hash of values.

Dataloader will coalesce all individual loads which occur until the first .sync is called on any promise returned by #load or #load_many, and then call your batch function with all requested keys.

user_loader.load(1)
  .then { |user| user_loader.load(user.invited_by_id) }
  .then { |invited_by| "User 1 was invited by #{invited_by[:name]}" }

# Elsewhere in your backend
user_loader.load(2)
  .then { |user| user_loader.load(user.invited_by_id) }
  .then { |invited_by| "User 2 was invited by #{invited_by[:name]}" }

    A naive solution is to issue four SQL queries to get required information, but with Dataloader this application will make at most two queries (one to load users, and second one to load invites).

    Dataloader allows you to decouple unrelated parts of your application without sacrificing the performance of batch data-loading. While the loader presents an API that loads individual values, all concurrent requests will be coalesced and presented to your batch loading function. This allows your application to safely distribute data fetching requirements throughout your application and maintain minimal outgoing data requests.

    Batch function

A batch loading function accepts an Array of keys, and returns an Array of values or a Hash that maps from keys to values (or a Promise that resolves to such an Array or Hash). There are a few constraints that must be upheld:

    • The Array of values must be the same length as the Array of keys.
    • Each index in the Array of values must correspond to the same index in the Array of keys.
• If a Hash is returned, it must include all keys passed to the batch loading function

For example, if your batch function was provided the Array of keys [ 2, 9, 6 ], you could return one of the following:

    [
      { id: 2, name: "foo" },
      { id: 9, name: "bar" },
      { id: 6, name: "baz" }
    ]
    {
      2 => { id: 2, name: "foo" },
      9 => { id: 9, name: "bar" },
      6 => { id: 6, name: "baz" }
    }
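The list/hash contract above can be sketched as a small normalizer (shown in Python for illustration; the gem itself is Ruby) that turns either return shape into values ordered by the requested keys:

```python
def normalize_batch_result(keys, result):
    """Map a batch function's return value (list or dict) to values ordered by keys."""
    if isinstance(result, dict):
        # A dict must cover every requested key
        missing = [k for k in keys if k not in result]
        if missing:
            raise KeyError(f"batch function omitted keys: {missing}")
        return [result[k] for k in keys]
    # A list must be the same length as, and aligned with, the keys
    if len(result) != len(keys):
        raise ValueError("values must be the same length as keys")
    return list(result)

keys = [2, 9, 6]
as_list = [{"id": 2}, {"id": 9}, {"id": 6}]
as_dict = {9: {"id": 9}, 2: {"id": 2}, 6: {"id": 6}}
```

Both shapes resolve to the same ordered values, which is why the constraints above insist on length/index alignment for lists and full key coverage for hashes.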

    Caching

Dataloader provides a memoization cache for all loads which occur within a single instance of it. After #load is called once with a given key, the resulting Promise is cached to eliminate redundant loads.

    In addition to relieving pressure on your data storage, caching results per-request also creates fewer objects which may relieve memory pressure on your application:

    promise1 = user_loader.load(1)
    promise2 = user_loader.load(1)
    promise1 == promise2 # => true
    

    Caching per-request

Dataloader caching does not replace Redis, Memcache, or any other shared application-level cache. Dataloader is first and foremost a data loading mechanism, and its cache only serves the purpose of not repeatedly loading the same data in the context of a single request to your application. To do this, it maintains a simple in-memory memoization cache (more accurately: #load is a memoized function).

    Avoid multiple requests from different users using the same Dataloader instance, which could result in cached data incorrectly appearing in each request. Typically, Dataloader instances are created when a request begins, and are not used once the request ends.

    See Using with GraphQL section to see how you can pass dataloader instances using context.

    Caching errors

    If a batch load fails (that is, a batch function throws or returns a rejected Promise), then the requested values will not be cached. However if a batch function returns an Error instance for an individual value, that Error will be cached to avoid frequently loading the same Error.

    In some circumstances you may wish to clear the cache for these individual Errors:

    user_loader.load(1).rescue do |error|
      user_loader.cache.delete(1)
      raise error
    end

    Disabling cache

    In certain uncommon cases, a Dataloader which does not cache may be desirable. Calling Dataloader.new({ cache: nil }) { ... } will ensure that every call to #load will produce a new Promise, and requested keys will not be saved in memory.

    However, when the memoization cache is disabled, your batch function will receive an array of keys which may contain duplicates! Each key will be associated with each call to #load. Your batch loader should provide a value for each instance of the requested key.

    loader = Dataloader.new({ cache: nil }) do |keys|
      puts keys
      some_loading_function(keys)
    end
    
    loader.load('A')
    loader.load('B')
    loader.load('A')
    
# => [ 'A', 'B', 'A' ]

    API

    Dataloader

    Dataloader is a class for fetching data given unique keys such as the id column (or any other key).

Each Dataloader instance contains a unique memoized cache. Because of this, it is recommended to use one Dataloader instance per web request. You can use longer-lived instances, but then you need to take care of cleaning the cache manually.

    You shouldn’t share the same dataloader instance across different threads. This behavior is currently undefined.

    Dataloader.new(options = {}, &batch_load)

    Create a new Dataloader given a batch loading function and options.

• batch_load: A block which accepts an Array of keys and returns an Array of values or a Hash that maps keys to values (or a Promise that resolves to such a value).
    • options: An optional hash of options:
      • :key A function to produce a cache key for a given load key. Defaults to function { |key| key }. Useful to provide when objects are keys and two similarly shaped objects should be considered equivalent.
  • :cache An instance of a cache used for caching of promises. Defaults to Concurrent::Map.new.
    • The only required API is #compute_if_absent(key).
        • You can pass nil if you want to disable the cache.
        • You can pass pre-populated cache as well. The values can be Promises.
  • :max_batch_size Limits the number of items that get passed in to the batch_load block. Defaults to INFINITY. You can pass 1 to disable batching.

    #load(key)

    key [Object] a key to load using batch_load

    Returns a Promise of computed value.

    You can resolve this promise when you actually need the value with promise.sync.

All calls to #load are batched until the first #sync is encountered. Then it starts batching again, and so on.

    #load_many(keys)

    keys [Array] list of keys to load using batch_load

    Returns a Promise of array of computed values.

    To give an example, to multiple keys:

    promise = loader.load_many(['a', 'b'])
    object_a, object_b = promise.sync

    This is equivalent to the more verbose:

    promise = Promise.all([loader.load('a'), loader.load('b')])
    object_a, object_b = promise.sync

    #cache

    Returns the internal cache that can be overridden with :cache option (see constructor)

    This field is writable, so you can reset the cache with something like:

    loader.cache = Concurrent::Map.new

    #wait

    Triggers all batched loaders until there are no keys to resolve.

    This method is invoked automatically when the value of any promise is requested with #sync.

    Here is the implementation that Dataloader sets as a default for Promise:

    class Promise
      def wait
        Dataloader.wait
      end
    end

    License

    MIT

    Visit original content creator repository https://github.com/sheerun/dataloader