Blog

  • compendium

    Compendium

    Ruby on Rails framework for making reporting easy.

    Usage

    Compendium is a reporting framework for Rails which makes it easy to create and render reports (with charts and tables). Compendium requires at least Ruby version 2.2.

    A Compendium report is a subclass of Compendium::Report. Reports can be defined using the simple DSL:

    class MyReport < Compendium::Report
      # Options define which parameters your report will accept when being set up.
      # An option is defined with a name, a type, and some settings (ie. default value, choices for radio buttons and
      # dropdowns, etc.)
      option :starting_on, :date, default: -> { Date.today - 1.month }
      option :ending_on, :date, default: -> { Date.today }
      option :currency, :radio, choices: [:USD, :CAD, :GBP]
    
      # By default, queries are converted to SQL and executed instead of returning AR models
      # The query definition block gets the report's current parameters
      # totals: true means that the last row returned should be interpreted as a row of totals
      query :deliveries, totals: true do |params|
        Items.where(delivered: true, purchased_at: (params[:starting_on]..params[:ending_on]))
      end
    
      # Define a filter to modify the results from the specified query (in this case :deliveries)
      # For example, this can be useful to translate columns prior to rendering, as it will apply
      # for all render types (table, chart, JSON)
      # Note: A filter can be applied to multiple queries at once
      filter :deliveries do |results, params|
        results.each do |row|
          row['price'] = sprintf('$%.2f', row['price'])
        end
      end
    
      # Define a query which collects data by using AR directly
      query :on_hand_inventory, collect: :active_record do |params|
        Items.where(in_stock: true)
      end
    
      # Define a query that works on another query's result set
      # Note: chart and data are aliases for query
      chart :deliveries_over_time, through: :deliveries do |results|
        results.group_by(&:purchased_at)
      end
    
      # Queries can also be used to drive metrics
      metric :shipping_time, -> results { results.last['shipping_time'] }, through: :deliveries
    end

    Reports can then be instantiated simply (this is done automatically if you use the supplied Compendium::ReportsController):

    report = MyReport.new(starting_on: '2013-06-01')
    report.run(self) # The parameter is the context to run the report in; usually this should be
                     # a controller context so that methods like current_user can be used

    Compendium also comes with a variety of different presenters, for rendering the setup page, and displaying charts (report.render_chart), tables (report.render_table) and metrics for your report. Charting is delegated through a ChartProvider to a charting gem (amcharts.rb is currently supported).

    Report Options

    Report options are defined by the keyword option in your report class. Options must have a name and a type (scalar, boolean, date, dropdown or radio). Additionally, an option can have a default value (given by a proc passed in with the default: key), and validations (via the validates: key).

    In order to specify parameters for the options, pass a hash to MyReport.new. Parameters are available via params:

    r = MyReport.new(starting_on: Date.today - 3.months, ending_on: Date.today)
    r.params
    
    # {
    #   "starting_on"=>Sun, 30 Aug 2015,
    #   "ending_on"=>Mon, 30 Nov 2015,
    # }

    Validation

    If validation is set up on any options, calling valid? on the report will validate any given parameters against the validations set up, and will populate an errors object. All validations provided by ActiveModel::Validations are available.

    class MyReport < Compendium::Report
      option :starting_on, :date, validates: { presence: true }
    end
    
    r = MyReport.new
    r.valid?
    # => false
    
    r.errors
    # => #<ActiveModel::Errors:0x007fe8359cc6b8
    #  @base={"starting_on"=>nil},
    #  @messages={:starting_on=>["This field is required."]}>

    Query types

    Compendium provides a few types of queries in order to make report writing more streamlined.

    Through Queries

    A through query lets you use the results of a previous query (or multiple queries) as the basis of your query. This lets you build on another query or combine multiple queries’ results into a single query. It is specified by passing the through: key to query, with a query name or an array of query names (as symbols).

    query(:dog_sales) { |params| Order.where(pet_type: 'dog', created_at: params[:starting_on]..params[:ending_on]) }
    query(:cat_sales) { |params| Order.where(pet_type: 'cat', created_at: params[:starting_on]..params[:ending_on]) }
    query(:bird_sales) { |params| Order.where(pet_type: 'bird', created_at: params[:starting_on]..params[:ending_on]) }
    
    query :total_sales, through: [:dog_sales, :cat_sales, :bird_sales] do |results, params|
      # results is a hash with keys :dog_sales, :cat_sales, :bird_sales
    end

    Count Queries

    A count query simplifies creating a query where you want a count (especially per group of something). A count query is specified by adding count: true to the query call.

    query :sales_per_day, count: true do
      Order.group("DATE(created_at)")
    end
    
    # results will look something like
    # { 2015-10-01 => 4, 2015-10-02 => 20, ... }

    Sum Queries

    Like a count query, a sum query is useful for performing an aggregate function on a grouped query, in this case summing the results. A sum query is specified by adding sum: :column_name to the query call.

    query :commission_per_salesperson, sum: 'commission' do
      # assume commission is a numeric column
      Order.group(:employee_id)
    end
    
    # results will be something like
    # { 1 => 840.34, 2 => 1065.02, ... }

    Collection Queries

    Sometimes you’ll want to run a query over a collection of data; for this, you can use a collection query. A collection query will perform the same query for each element of a hash or array, or for each result of a query. A collection is specified via collection: [...], collection: { ... } or collection: query (note: not a symbol but an actual query object).
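    Conceptually (this is plain Ruby, not Compendium’s actual API), a collection query applies the same query block to each element of the collection and gathers the results per key:

    ```ruby
    # Sketch of the collection-query idea: run one query block per element
    # of a collection and collect the results keyed by element.
    regions = { east: [120, 80], west: [45, 60] }

    run_per_element = lambda do |collection, &query|
      collection.transform_values { |rows| query.call(rows) }
    end

    totals = run_per_element.call(regions) { |rows| rows.sum }
    # totals == { east: 200, west: 105 }
    ```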

    Tying into your Rails application

    Compendium has a Rails::Engine, which adds a default controller and some views. If desired, the controller can be subclassed so that filters and the like can be added. The controller (which extends ApplicationController automatically) has two actions: setup (collect options for the report) and run (execute and render the report), with accompanying views. The setup view can be included inside your own view using the render_report_setup method (NOTE: you have to pass local_assigns into it if you want locals to be passed along).

    Routes are not automatically added to your application. In order to do so, you can use the mount_compendium helper within your config/routes.rb file

    mount_compendium at: '/report', controller: 'reports' # controller defaults to compendium/reports

    Rendering report results in other formats

    JSON

    While the default action when running a report is to render a view with the results, Compendium reports can be rendered as JSON. If using the default routes provided by mount_compendium (assuming compendium was mounted at /report), GETing or POSTing to report/report_name.json will return the report results as JSON. You can also collect the results of a single query (instead of the entire report) by GETing or POSTing to report/report_name/query_name.json.

    CSV

    A report can be exported as CSV. In order to enable CSV exports, a query needs to be defined as the exporter for the report. Note that only one query can be exported, because otherwise there’s no way to ensure that the headings are consistent.

    class MyReport < Compendium::Report
      exports :csv, :deliveries # Defines `deliveries` to be the query that is exported to CSV
    end

    Note that if your report class subclasses another, and you want to disable a previously defined exporter, you can with exports :csv, false.

    When a report has a CSV exporter defined, an Export CSV button will appear on the default setup page. You can also directly export using the path /report/:report_name/export.csv (using GET or POST).

    Customization of the query can be done by setting table options for the query. See the Rendering a table section below for more details.

    Displaying Report Results

    Chart Providers

    As of 1.1.0, chart providers have been extracted out of the main repository and are available as their own gems. If you want to render queries as a chart, a chart provider gem is needed.

    If multiple chart providers are installed, you can select the one you wish to use with the following initializer:

    Compendium.configure do |config|
      config.chart_provider = :AmCharts # or any other provider name
    end

    The following providers are available (If you would like to contribute a chart provider, please let me know and I’ll add it to the list):

    Rendering a table

    Note: When table settings are defined for a query, they are applied both to rendering HTML tables, as well as CSV file exports. See Rendering report results in other formats above for more details.

    In addition to charts, you can output a query as a table. When a query is rendered as a table, each row is output with columns in the query order (so you may want to use an explicit select in your query to order the columns as required). If the query is set up with totals: true, a totals row will be added to the bottom of the table.

    In order to customize the table, you can add a table declaration to your report. Each query can have different table settings.

    class MyReport < Compendium::Report
      table :deliveries do
        # The i18n scope to use for any translations can be specified:
        i18n_scope 'reports.my_report'
        
        # Column headings by default are the column name passed through I18n,
        # but can be overridden:
      
        # ... with a block...
        override_heading do |heading|
          # ...
        end
      
        # ... or one at a time...
        override_heading :col, 'My Column'
      
        # Records where a cell is 0 or nil can have the value overridden to something else:
        display_zero_as 'N/A'
        display_nil_as 'NULL'
      
        # You can specify how to format numbers:
        number_format "%0.1f"
      
        # You can also specify formatting on a per-column basis:
        format(:col) do |value|
          "#{(value / 50) * 100}%"
        end
      end
    end

    A query is rendered from a view; pass the view context as the first parameter. Optionally, a block can be passed to override previously defined settings:

    my_query.render_table(self) do
      display_zero_as 'nil' # Override the previous version just for this render
    end

    CSS Classes

    By default, Compendium uses the following four CSS classes when rendering a table:

    Element                  Element Type   Class Name
    Table                    table          results
    Table header             tr             headings
    Table data               tr             data
    Table footer (totals)    tr             totals

    Each class can be overridden when setting up the table:

    my_query.render_table(self) do |t|
      t.table_class   'my_table_class'
      t.header_class  'my_header_class'
      t.row_class     'my_row_class'
      t.totals_class  'my_totals_class'
    end

    Interaction with other gems

    • If accessible_tooltip is present, option notes will be rendered in a tooltip rather than as straight text.

    Installation

    Add this line to your application’s Gemfile:

    gem 'compendium'
    

    And then execute:

    $ bundle
    

    Or install it yourself as:

    $ gem install compendium
    

    Contributing

    1. Fork it
    2. Create your feature branch (git checkout -b my-new-feature)
    3. Commit your changes (git commit -am 'Add some feature')
    4. Push to the branch (git push origin my-new-feature)
    5. Create new Pull Request
    Visit original content creator repository https://github.com/RubyOnWorld/compendium
  • Invoices_System

    Invoice Management System

    Overview

    This project is a Laravel-based Invoice Management System designed to help users create, manage, and track client invoices efficiently. It includes features like partial payments, role-based access control, and dynamic invoice filtering. The project also integrates Laravel Breeze for authentication and SweetAlert2 for handling user alerts.

    Table of Contents

    1. Features
    2. Installation
      1. Docker Setup
      2. Manual Installation (Without Docker)
    3. Registering a User for Local Development
    4. Usage
    5. Technologies Used
    6. Screenshots
    7. Project Structure
    8. License

    Features

    • Invoice Creation and Management: Easily create, update, and manage invoices for clients.

    • Partial Payments: Manage partial payments and automatically update client balances.

    • Role-based Access Control: Admins have full access, while sales users can only view and manage invoices.

    • AJAX Dynamic Data Fetching: Fetch client and product details dynamically while creating or editing invoices.
    • Print Functionality: Print invoices with a click of a button.


    Installation

    Docker Setup

    To run the application using Docker, ensure that Docker is installed on your machine and follow these steps:

    1. Clone the repository:

      git clone https://github.com/Aya-Sherif/Invoices_System.git
    2. Navigate to the project directory:

      cd Invoices_System
    3. Build and run the Docker container:

      docker-compose up --build
    4. Set up the environment file:

      • Copy .env.example to .env and update the environment variables:

        cp .env.example .env
      • Configure your database, mail, and other settings in the .env file.

    5. Run database migrations:

      After starting the container, you need to run the migrations to set up your database. Open another terminal window and execute:

      docker exec -it <container_name> php artisan migrate

      Replace <container_name> with the actual name of your running Docker container. You can find the container name by running docker ps.

    6. Access the application in your web browser at http://localhost:8000.

    7. Admin Access: To access the admin panel, navigate to the /login page (default root is for user access).

    Manual Installation (Without Docker)

    If you prefer running the application without Docker, follow these steps:

    1. Clone the Repository:

      git clone https://github.com/Aya-Sherif/Invoices_System.git
      cd Invoices_System
    2. Install dependencies:

      composer install
      npm install
      npm run dev
    3. Set up the environment file:

      • Copy .env.example to .env and update the environment variables:

        cp .env.example .env
      • Configure your database, mail, and other settings in the .env file.

    4. Migrate the database:

      php artisan migrate
    5. Run the application:

      php artisan serve

    Registering a User for Local Development

    In the production environment, the register page is disabled, but if you’re running this project locally and need access to the admin panel, follow these steps:

    1. Open the routes/auth.php file and uncomment the part that registers a user.

    2. After that, go to /register in your browser and create your admin account by entering your email and password.

    3. Once you have registered, you can log in to the admin panel by navigating to /login.

    Usage

    • After installation, log in as an admin to have full control over the system.
    • Sales, Accounts or Stock users will have restricted access to certain features.
    • Use the print functionality to generate a hard copy of the invoice.

    Technologies Used

    • Laravel 11: Backend framework powering the web application.
    • Bootstrap 3: Front-end framework for responsive design.
    • SweetAlert2: Handles alert and feedback for user interactions.
    • JavaScript: Powers dynamic features such as the slideshow.

    Project Structure

    • app/Services: Contains service classes such as ClientService, ProductService, etc.
    • app/Http/Controllers: Manages routes and core logic of the application.
    • resources/views: Contains Blade templates for the frontend design.
    • database/migrations: Handles database schema migration (without seeders).

    Screenshots

    Here are some screenshots of the application:

    Products Add Product User Messages

    Contributing

    If you would like to contribute to this project, please fork the repository and submit a pull request with your changes.

    License

    This project is open-source under the MIT license.

    Visit original content creator repository https://github.com/Aya-Sherif/Invoices_System
  • next-react-app

    next-react-app

    The problem

    If you want to develop a client-side application with React, you will need some kind of a starter / boilerplate (provided — of course — configuring Webpack isn’t your passion).
    You’d probably reach for Create React App, the most popular one.

    It has some not-so-nice caveats though, including:

    This solution

    By using Next.js toolchain you can get the same benefits as CRA gives you, only without the caveats.

    Plus you get:

    and much more

    For more information, I highly recommend reading Replacing Create React App with the Next.js CLI

    Getting started

    This repo is a simple Next.js starter configured to redirect all requests to the index page, effectively behaving like a SPA.
    Read the official docs for more information.

    Development:

    1. npm run dev to start the development environment

    Production:

    1. npm run build – to build, bundle & export static files
    2. npm start – to start a server to preview the build

    Inspired by @tannerlinsley‘s gist and tweet and another tweet

    Visit original content creator repository
    https://github.com/selrond/next-react-app

  • ndi-2020-bigeight

    SurfClean, by team BigEight


    SurfClean, the app for surfers who want to keep things clean

    From a phone or a computer, this application lets them:

    • sign up and log in,
    • have fun playing a game similar to Among Us, customized to their favourite theme and with specially composed music,
    • view statistics on the water quality of their favourite beach*,
    • view the items collected by other SurfClean users,
    • (not functional) record an activity via a form and report the items detected.

    * (Saint-Malo only)

    The BigEight team

    Proud to represent the Besançon region, the Big Eight team is a band of merry fellows:

    Installing the application

    • Install node (v >= 14) and mysql
    • Create the mysql database (a password can be generated with openssl rand -base64 16):
      CREATE USER 'bigeight'@'localhost' IDENTIFIED BY 'PASSWORD_HERE';
      # if mysql v >= 8.0: ALTER USER 'bigeight'@'localhost' IDENTIFIED WITH mysql_native_password BY 'PASSWORD_HERE'
      CREATE DATABASE bigeight;
      GRANT ALL PRIVILEGES ON bigeight.* TO 'bigeight'@'localhost';
      
    • Create the .env configuration file at the project root
      PORT=12000
      DB_HOST=localhost
      DB_USER=bigeight
      DB_PASSWORD=PASSWORD_HERE
      DB_DATABASE=bigeight
      
    • Start the project with node .
    Visit original content creator repository https://github.com/nathanaelhoun/ndi-2020-bigeight
  • scantailor-experimental

    ScanTailor-Experimental

    Based on Scan Tailor – scantailor.org

    ScanTailor logo from scantailor.org

    About

    Scan Tailor is an interactive post-processing tool for scanned pages. It performs operations such as:

    You give it raw scans, and you get pages ready to be printed or assembled into a PDF or DJVU file. Scanning, optical character recognition, and assembling multi-page documents are out of scope of this project.

    Scan Tailor is Free Software (which is more than just freeware). It’s written in C++ with Qt and released under the General Public License version 3. We develop both Windows and GNU/Linux versions.

    History and Future

    This project started in late 2007 and by mid 2010 it reached production quality.

    In 2014, the original developer Joseph Artsimovich stepped aside, and Nate Craun (@ncraun) took over as the new maintainer.

    For information on contributing and the longstanding plan for the project, please see the Roadmap wiki entry.

    For any suggested changes or bugs, please consult the Issues tab.

    Usage

    Scan Tailor is being used not just by enthusiasts, but also by libraries and other institutions. Scan Tailor processed books can be found on Google Books and the Internet Archive.

    • Prolog for Programmers. The 47.3MB pdf is the original, and the 3.1MB pdf is after using Scan Tailor. The OCR, Chapter Indexing, JBIG2 compression, and PDF Binding were not done with Scan Tailor, but all of the scanned image cleanup was. [1]
    • Oakland Township: Two Hundred Years by Stuart A. Rammage (also available: volumes 2, 3, 4.1, 4.2, 5.1, and 5.2) [2]
    • Herons and Cobblestones: A History of Bethel and the Five Oaks Area of Brantford Township, County of Brant by the Grand River Heritage Mines Society [2]

    Installation and Tips

    Scanning Tips, Quick-Start-Guide, and complete Usage Guide, including installation information (via the installer or building from source) can be found in the wiki!

    Installation on Windows

    On Windows 10 1809 or higher to install Scantailor-Experimental just use command:

    winget install "Scantailor-Experimental"

    You can also download binaries from Release page.

    Additional Links

    Visit original content creator repository https://github.com/ImageProcessing-ElectronicPublications/scantailor-experimental
  • bllflow

    bllflow – an R package for efficient, transparent data preparation and reporting

    Is bllflow for you?

    • Do you shudder at the thought of trying to update the analyses for a previous study? (let alone imagine someone else trying to replicate your analyses?)

    • Are your data and statistical models becoming more complex, challenging to perform, and challenging to report?

    • Are you concerned about the misuse of statistical findings? But not sure about reporting all results of all analyses?

    • Do you work in teams that span disciplines and institutions?

    We answered ‘yes’ to all these questions and then created bllflow.

    The purpose of bllflow

    bllFlow supports transparent, reproducible data analyses and model development. The goal is to improve science quality with quicker and more efficient data analyses.

    What does bllflow do?

    The focus of bllflow is data cleaning and variable transformation – the most time-consuming and tedious analytic tasks – and reporting of analyses.

    bllflow functions and workflow build from other packages including sjmisc, tableone, codebook, and Hmisc.

    There are three main features:

    1. The Model Specification Workbook (MSW) – Start your model development with worksheets (CSV files) that contain information about the variables in your model, data cleaning and transformation steps and how to create output tables.

    2. Functions to perform routine data cleaning and transformation tasks – use functions with or without the Model Specification Workbook. Functions with ‘BLL’ in the function name perform data cleaning and transformation using the Model Specification Workbook.

    3. Formatted output files, tables – results of your analyses in a consistent format following the concept of ‘one document, many uses’.

    At any point of your analyses you have:

    • a log of data cleaning and transformed variables (how your data was cleaned and transformed).
    • a codebook to facilitate data transparency and provenance.

    bllflow supports the use of metadata, including:

    • the Data Documentation Initiative (DDI).
    • Predictive Model Markup Language (PMML) files for transparent algorithm reporting and deployment.

    bllflow workflow and functions support reporting guidelines such as TRIPOD, STROBE, and RECORD.

    Installation

    # If not installed, install the devtools
    install.packages("devtools")
    
    # then, install the package
    devtools::install_github("Big-Life-Lab/bllFlow")
    

    There are plans to submit bllFlow to CRAN once we include all seven steps of the bllflow workflow. Currently on step #4.

    Contributing to the package

    Please follow this guide if you like to contribute to the bllflow package.

    Visit original content creator repository https://github.com/Big-Life-Lab/bllflow
  • cpp-unexpected-behaviour.github.io

    C++: Unexpected Behaviour

    Abstract

    Do you really think you know C++? Then you probably want to participate in this talk, where you will discover the most surprising, weird, strange or really “WTF” language features you could encounter. There are so many obscure corners of the language that seem to go against common programming intuition. The freedom that C++ gives the programmer may be a double-edged sword; while the user can do many things that have been abstracted away in other languages, it’s very easy to shoot ourselves in the foot.

    From unintended private member access to unexpected function definitions, in this talk, we will walk you through the quirks that still exist in the language today, and the motivations behind them.

    Authors

    Antonio Mallia

    Github: @amallia

    Website: http://www.antoniomallia.it

    Jaime Alonso Lorenzo

    Github: @jaimealonso

    Website: https://www.linkedin.com/in/jaimealonsolorenzo/

    Topconf 2017

    City: Duesseldorf

    Date: 4th – 6th Oct

    Website: https://www.topconf.com/conference//duesseldorf-2017/talk/c-unexpected-behaviour/

    Slides: http://cpp-unexpected-behaviour.github.io/topconf2017

    Meeting C++ 2017

    City: Berlin

    Date: 9th – 11th November

    Website: http://meetingcpp.com/2017/talks/items/Cpp__unexpected_behaviour.html

    Slides: http://cpp-unexpected-behaviour.github.io/meetingcpp2017

    Visit original content creator repository
    https://github.com/cpp-unexpected-behaviour/cpp-unexpected-behaviour.github.io

  • RequestMaster

    RequestMaster – Native API Handler

    Overview

    • RequestMaster is a powerful and lightweight Android application designed to manage and execute HTTP API requests without relying on any third-party libraries. Built using modern technologies and best practices, RequestMaster offers a robust solution for developers who need to interact with APIs using GET, POST, and File Upload methods.

    Tech Stacks

    • MVVM Architecture: promotes reusability of code, greatly simplifying the process of creating user interfaces.
    • Jetpack Compose: modern UI toolkit used for all screens.
    • HttpURLConnection: used for handling all requests.
    • SQLite: used to cache requests and responses.

    Api Service

    As I got started, my main concern was figuring out how to make requests without Retrofit. A simple search showed me that I could use HttpURLConnection instead. So now let’s figure out how HttpURLConnection works:

    1. Create a URL object from our input URL:
        val url = URL(inputUrl.toString())
    2. Open the connection, using the openConnection() method of the URL object:
        httpURLConnection = url.openConnection() as HttpURLConnection
    3. Set the request method and request headers (optional): the default request type is GET.

    4. Now comes sending and receiving data: to send data we get the output stream from the connection, and to receive data we get the input stream from the connection.

    5. Once the request is done, we map the results to our API response model.

    6. Finally, we close the connection:

        httpURLConnection.disconnect()

    If you wondered, like me at first, where the connection actually starts, I’ve got you: after a quick search I found that HttpURLConnection starts the connection when we try to get the stream for receiving or sending data.

    As simple as that, I finished creating 3 methods for requests: one for GET requests, one for POST with a JSON body, and one for file upload.

    Caching and Database

    As mentioned in the task description, we are not allowed to use Room, so what came to my mind first was building caching with an SQLite database. I created 3 classes:

    1. A table object containing all database metadata, like: (DB name, column names, table creation and dropping queries).

    2. A model data class for our database objects, with mapping functions to our domain object.

    3. A database helper class for executing all queries and filtering data from the database.

    Domain

    Since we are now done with our data layer, it’s time to create the domain layer, which is simple:

    1. I created a data class for responses (the domain model, which we will use in our app layer).
    2. A repository interface containing all the necessary functions.

    Viewmodel

    I created view models based on events and states. Each view model holds a state; whenever an event occurs in the UI, like a user interaction, we update the state with the changes. So for each view model there is one state and one function called onEvent(), responding to all possible events coming from the UI.

    That left me with two challenges: how am I going to run repo functions on a background thread (we are doing tasks that could take a long time to execute, so they can block our UI thread), and if I can do so, how am I going to update our UI with the new state from that thread (as we know, we cannot interact with the UI directly from a background thread)?

    The easy solution is to call:

    thread(start = true) {
     // your block of code here 
    }

    But that way is not optimal: each time we run a background task we create a new thread, which is ineffective since creating a thread is an overhead and can slow the execution of the app. So we need another solution.

    There are several ways of running tasks on a background thread without coroutines and without creating a new thread each time (AsyncTask, WorkManager and Executor). We cannot use AsyncTask as it’s deprecated, so I decided to run blocks on a background thread with Java’s Executors using a CachedThreadPool (it creates a thread pool that creates new threads as needed, but reuses previously constructed threads when they are available).

    1. I created an instance of an executor and a function to submit any block of code to it:
    private val executors = Executors.newCachedThreadPool()
    
    fun runInBackground(block: () -> Unit) {
      executors.submit(block)
    }
    2. As we know, each thread has exactly one looper, so when we create a handler with Looper.getMainLooper() we associate that handler with the messaging queue of the UI thread, where it can post blocks or messages.
    3. For the UI thread I created uiHandler; this allows us to post and process messages on the main thread’s message queue, which is essential for updating the UI from background threads:
    private val uiHandler = Handler(Looper.getMainLooper())
    
    fun runInUiThread(block: () -> Unit) {
      uiHandler.post(block)
    }

    Note: In my case, since I am using state and events, the code would still work even without runInUiThread. Why? Because the background thread doesn't interact with the UI directly; it updates the state, and the state updates the UI.

    UI Screens

    I built the UI with Jetpack Compose, with only two screens: one for executing requests and one for listing all of the logs.

    For each screen, I created one data class as the screen state and one sealed class holding all possible events in the app.

    Screenshots: Main Screen and Logs Screen
    Visit original content creator repository https://github.com/ahmedelshaikh20/RequestMaster
  • Sankhya

    Sankhya

    Sankhya is a JavaScript utility library that declaratively transforms one object into another using pure transformation functions.

    It is inspired by the Plumbing Clojure library.

    Installation

    npm i sankhya-graph-js
    or
    yarn add sankhya-graph-js
    
    

    Example

    Traditional way of defining Transformation:

    function stats(input) {
      const {values} = input
    
      if (!values) throw new Error('No property "values"!')
    
      const count = values.length
      const mean = values.reduce(sum) / count
      const meanSquare = values.map(square).reduce(sum) / count
      const variance = meanSquare - square(mean)
    
      const output = {
        count,
        mean,
        meanSquare,
        variance,
      }
      return output
    }
    
    function sum(a, b) {
      return a + b
    }
    function square(a) {
      return a * a
    }
    
    const data = {values: [1, 2, 3, 4, 5, 6, 7]}
    const transformedObj = stats(data)
    
    console.log(transformedObj)
    // -> Object {count: 7, mean: 4, meanSquare: 20, variance: 4}

    Problems with the above approach:

    The drawback of this approach is that each of these transformation functions depends on the ones defined above it, so we have to make sure the proper dependency order is maintained.

    If you try to exchange the order in which these values are defined, it will blow up; for example, exchanging the meanSquare and variance definitions won't work.

    These computations are not independent, and that becomes a liability.
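    To make the ordering problem concrete, here is a small sketch (illustrative names, plain JavaScript semantics) showing what happens when variance is moved above meanSquare in the traditional stats function: the const binding is read before it is initialized, and the engine throws a ReferenceError.

```javascript
const square = a => a * a
const sum = (a, b) => a + b

// Same logic as stats above, but with variance defined before meanSquare:
function statsSwapped(input) {
  const {values} = input
  const count = values.length
  const mean = values.reduce(sum) / count
  // `meanSquare` is read here before its `const` is initialized,
  // which throws a ReferenceError (temporal dead zone):
  const variance = meanSquare - square(mean)
  const meanSquare = values.map(square).reduce(sum) / count
  return {count, mean, meanSquare, variance}
}

let error
try {
  statsSwapped({values: [1, 2, 3, 4, 5, 6, 7]})
} catch (e) {
  error = e
}
console.log(error instanceof ReferenceError) // true
```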

    The hidden dependency graph maintained by stats:

    image

    Using sankhya, we can refactor stats to this very declarative, robust and structured form, using a map from keywords to keyword functions:

    const sankhya = require('sankhya-graph-js')
    
    const square = x => x ** 2
    const sum = (x, y) => x + y
    
    const stats = sankhya({
      meanSquare: (i, o) => i.values.map(square).reduce(sum) / o.count,
      count: (i, o) => i.values.length,
      variance: (i, o) => o.meanSquare - square(o.mean),
      mean: (i, o) => i.values.reduce(sum) / o.count,
    })
    
    const transformedObj = stats({values: [1, 2, 3, 4, 5, 6, 7]})
    console.log(transformedObj)
    // -> Object {count: 7, meanSquare: 20, mean: 4, variance: 4}

    Explanation:

    Every output transformation is pure and expressed with an arrow function, where the i and o parameters represent input (data) and output (the future returned object) respectively.

    We can express output values in terms of other output values; sankhya understands the underlying graph and executes the computations in the right order! The order in which these functions are defined can be arbitrary.

    Moreover, each micro-function among the object's values is executed exactly once, and its result is cached for subsequent calls. (You can verify this by logging from inside the arrow functions.)

    This is why the given functions need to be pure.
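    As a rough illustration of this execute-once behaviour, here is a simplified, hypothetical mini-transformer (not sankhya's actual source) that resolves dependencies through a Proxy and caches every computed key:

```javascript
function makeTransformer(config) {
  return function (input) {
    const cache = {}
    // Reading output.someKey computes that key on demand and caches it,
    // so every micro-function in `config` runs at most once per call.
    const output = new Proxy(cache, {
      get(target, key) {
        if (!(key in target)) target[key] = config[key](input, output)
        return target[key]
      },
    })
    for (const key of Object.keys(config)) void output[key] // force all keys
    return {...cache}
  }
}

let calls = 0
const stats = makeTransformer({
  count: (i, o) => (calls++, i.values.length),
  mean: (i, o) => i.values.reduce((a, b) => a + b) / o.count,
})

console.log(stats({values: [1, 2, 3]})) // { count: 3, mean: 2 }
console.log(calls) // 1 -> `count` ran once even though it was read twice
```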

    Validation

    Before performing any transformation, sankhya validates the input data and throws if a property accessed on the input does not exist:

    function dataAttributeProxy(data) {
      return new Proxy(data, {
        get: (t, p, r) => {
          if (p in t) {
            return t[p]
          }
          throw new Error(`Data object is missing key ${p}`)
        },
      })
    }
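    For illustration, here is how that proxy behaves in use (re-declared here so the snippet is self-contained):

```javascript
// dataAttributeProxy re-declared here so the snippet is self-contained:
function dataAttributeProxy(data) {
  return new Proxy(data, {
    get: (t, p) => {
      if (p in t) return t[p]
      throw new Error(`Data object is missing key ${p}`)
    },
  })
}

const data = dataAttributeProxy({values: [1, 2, 3]})
console.log(data.values) // [ 1, 2, 3 ]

let error
try {
  void data.count // `count` is not on the input, so the proxy throws
} catch (e) {
  error = e
}
console.log(error.message) // Data object is missing key count
```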

    Laziness

    Sometimes we define a heavy computation function, rich with all the data we might need, but in some places we only need a subset of it.

    In the traditional approach we can't get that subset without triggering all of the computations.

    For these cases we have .lazy.

    For example, suppose we only need the mean of our data:

    const data = {values: [1, 2, 3, 4, 5, 6, 7]}
    const transformed = stats.lazy(data)
    
    // The object is initially empty:
    console.log(transformed)
    // -> Object {}
    
    // We materialize and objectify the computations as they are needed:
    console.log(transformed.mean)
    // -> 4
    
    // The results are then attached to the object,
    // but anything we don't need is not computed:
    
    console.log(transformed)
    // -> Object {count: 7, mean: 4}

    This is made possible by using prototype getters.

    This also guarantees that only the minimum number of steps necessary to get the result is executed.
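    A minimal sketch of how such laziness can be built with prototype getters (an assumption about the approach, not sankhya's actual implementation): each key is a getter on the prototype that computes its value on first access and then attaches it to the instance as a plain own property.

```javascript
function makeLazy(config) {
  return function (input) {
    const proto = {}
    const obj = Object.create(proto)
    for (const key of Object.keys(config)) {
      Object.defineProperty(proto, key, {
        get() {
          const value = config[key](input, obj)
          // Materialize the result as a plain own property so the
          // getter body runs at most once per key.
          Object.defineProperty(obj, key, {value, enumerable: true})
          return value
        },
      })
    }
    return obj
  }
}

const lazyStats = makeLazy({
  count: (i, o) => i.values.length,
  mean: (i, o) => i.values.reduce((a, b) => a + b) / o.count,
  variance: () => { throw new Error('never computed in this example') },
})

const t = lazyStats({values: [1, 2, 3]})
console.log({...t})  // {} -> nothing has been computed yet
console.log(t.mean)  // 2 -> computes count, then mean
console.log({...t})  // { count: 3, mean: 2 } -> variance never ran
```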

    Type Definitions

    export type sankhyaConfig<I, O> = {
      [K in keyof O]: (i: I, o: O) => O[K]
    }
    
    interface sankhyaTransformer<I, O> {
      (input: I): O
      lazy: (input: I) => O
    }
    
    declare function sankhya<I = Record<string, any>, O = Record<string, any>>(
      config: sankhyaConfig<I, O>,
    ): sankhyaTransformer<I, O>
    
    export default sankhya
    Visit original content creator repository https://github.com/ajitjha393/Sankhya
  • r1-io


    NPM version NPM downloads GitHub vk-io version(4.4.0)


    Guide

    You can see a simple project here

    You can see a more advanced project here

    1. Create context of app
    enum Menus {
      Main = 'main',
      Settings = 'settings',
    }
    
    interface User {
      name: string;
      selectedMenu: Menus;
    }
    
    export interface BotContext {
      user: User;
    }

    2. Create actions you will use
    const gotoMenuAction = createParametarizedAction<BotContext, Menus>(
      'goto menu',
      async (menu, {send}, {user}) => {
        user.selectedMenu = menu;
        await send(`Welcome to ${menu}`);
      }
    );
    
    const setRandomUsername = createAction<BotContext>(
      'set random username',
      async ({send}, {user}) => {
        const getRandomInt = (max: number) => Math.floor(Math.random() * max);
        const randomName = ['Fish', 'Sticks', 'Kanye West', 'Toivo', 'SunBoy'];
    user.name = randomName[getRandomInt(randomName.length)];
        await send(`Your name is ${user.name}`);
      }
    );

    3. Create menu constructors
    const SettingsMenu: R1IO.FC<BotContext> = async () => (
      <menu>
        <row>
          <button label={`Get random username`} onClick={setRandomUsername()} />
        </row>
        <row>
          <button onClick={gotoMenuAction(Menus.Main)}>Goto main menu</button>
        </row>
      </menu>
    );
    
    const MainMenu: R1IO.FC<BotContext> = ({user}) => (
      <menu>
        <row>
          <button label={`Hello ${user.name}`} />
        </row>
        <row>
          <button onClick={gotoMenuAction(Menus.Settings)}>
            Goto settings menu
          </button>
        </row>
      </menu>
    );

    4. Create router & your context filler (middleware)
    const user: User = {
      name: 'Dmitriy',
      selectedMenu: Menus.Main,
    };
    
    const router = createRouter<BotContext, Menus>(
      {
        [Menus.Main]: {build: MainMenu},
        [Menus.Settings]: {build: SettingsMenu},
      },
      ({user}) => user.selectedMenu
    );
    
    export const RootMiddleware = createMiddleware(router, async () => ({user}));

    Install

    1. Add package to your project
    yarn add r1-io

    or

    npm i r1-io

    2. Add vk-io to your project (only 4.4.0 tested)
    yarn add vk-io@4.4.0

    or

    npm i vk-io@4.4.0

    3. Add these lines to your tsconfig.json
    {
      "compilerOptions": {
        "jsx": "react",
        "jsxFactory": "R1IO.createElement",
        "jsxFragmentFactory": "R1IO.Fragment"
      }
    }

    Features

    1. React components instead of keyboard builder
    const MainMenu: R1IO.FC<BotContext> = ({user}) => (
      <menu>
        <row>
          <button label={`Hello ${user.name}`} />
        </row>
        <row>
          <button onClick={gotoMenuAction(Menus.Settings)}>
            Goto settings menu
          </button>
        </row>
      </menu>
    );

    2. Async React components
    const MainMenu: R1IO.FC<BotContext> = async ({user}) => {
      await delay(2000);
      return (
        <menu>
          <row>
            <button label={`Hello ${user.name}`} />
          </row>
        </menu>
      );
    };

    3. User based actions

    Visit original content creator repository https://github.com/stercoris/r1-io