  • filterSeries
  • filterLimit
  • select -> filter
  • selectSeries -> filterSeries
  • selectLimit -> filterLimit
  • reject
  • rejectSeries
  • rejectLimit
  • detect
  • detectSeries
  • detectLimit
  • find -> detect
  • findSeries -> detectSeries
  • findLimit -> detectLimit
  • pick *
  • pickSeries *
  • pickLimit *
  • omit *
  • omitSeries *
  • omitLimit *
  • reduce
  • inject -> reduce
  • foldl -> reduce
  • reduceRight
  • foldr -> reduceRight
  • transform
  • transformSeries *
  • transformLimit *
  • sortBy
  • sortBySeries *
  • sortByLimit *
  • some
  • someSeries
  • someLimit
  • any -> some
  • anySeries -> someSeries
  • anyLimit -> someLimit
  • every
  • everySeries
  • everyLimit
  • all -> every
  • allSeries -> everySeries
  • allLimit -> everyLimit
  • concat
  • concatSeries
  • concatLimit *
  • Control Flow
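
    Each "old -> new" pair in the Collections list above is an alias: the old Async name and the new name point at the same function, so existing Async code keeps working unchanged (the asterisked entries appear to be Neo-Async-specific additions). Below is a minimal sketch of the filter/select pair; it assumes Neo-Async v2's error-first iteratee callback, and the sample data is illustrative only:

    const async = require('neo-async');

    const numbers = [1, 2, 3, 4, 5];

    // The iteratee reports (err, keep) for each item.
    const isOdd = (num, done) => setTimeout(() => done(null, num % 2 === 1), 10);

    async.filter(numbers, isOdd, (err, odds) => {
      console.log(odds); // [1, 3, 5]
    });

    // `select` is an alias of `filter`, so this call behaves identically.
    async.select(numbers, isOdd, (err, odds) => {
      console.log(odds); // [1, 3, 5]
    });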

    Utils

    Mode

    Benchmark

    Benchmark: Async vs Neo-Async

    How to check

    $ git clone git@github.com:suguru03/async-benchmark.git
    $ cd async-benchmark
    $ npm install
    $ node . # It might take more than one hour...
    

    Environment

    Result

    Neo-Async is 1.27 to 10.7 times faster than Async.

    Each value below is the ratio of average speed (Neo-Async / Async); for example, 3.71 for each means Neo-Async ran that benchmark 3.71 times faster on average.

    Collections

    function benchmark func-comparator
    each 3.71 2.54
    eachSeries 2.14 1.90
    eachLimit 2.14 1.88
    eachOf 3.30 2.50
    eachOfSeries 1.97 1.83
    eachOfLimit 2.02 1.80
    map 4.20 4.11
    mapSeries 2.40 3.65
    mapLimit 2.64 2.66
    mapValues 5.71 5.32
    mapValuesSeries 3.82 3.23
    mapValuesLimit 3.10 2.38
    filter 8.11 8.76
    filterSeries 5.79 4.86
    filterLimit 4.00 3.32
    reject 9.47 9.52
    rejectSeries 7.39 4.64
    rejectLimit 4.54 3.49
    detect 6.67 6.37
    detectSeries 3.54 3.73
    detectLimit 2.38 2.62
    reduce 4.13 3.23
    reduceRight 4.23 3.24
    transform 5.30 5.17
    sortBy 2.24 2.37
    some 6.39 6.10
    someSeries 5.37 4.66
    someLimit 3.39 2.84
    every 6.85 6.27
    everySeries 4.53 3.90
    everyLimit 3.36 2.75
    concat 9.18 9.35
    concatSeries 7.49 6.09

    Control Flow

    function benchmark func-comparator
    parallel 7.54 5.45
    series 3.29 2.41
    waterfall 5.12 4.27
    whilst 1.96 1.95
    doWhilst 2.07 1.96
    until 2.10 1.99
    doUntil 1.98 2.04
    during 10.7 7.09
    doDuring 5.98 6.03
    queue 1.83 1.75
    priorityQueue 1.79 1.75
    times 3.84 3.65
    race 1.45 1.27
    auto 3.23 3.50
    retry 9.43 6.78
    kodo (a Gogs repository)

    log_views.py

    # -*- coding: utf-8 -*-
    from django.conf import settings
    from django_logit import logit
    from django_response import response

    from logs.models import MchSearchModelAndCameraLogInfo


    @logit
    def collect_camera_adaptive_log(request):
        # brand_id falls back to the configured default; note it is read but not stored below.
        brand_id = request.POST.get('brand_id', settings.KODO_DEFAULT_BRAND_ID)
        user_id = request.POST.get('user_id', '')

        # Search/selection flags from the POST body; each defaults to 0 when absent.
        is_search_model = int(request.POST.get('is_search_model', 0))
        is_search_camera = int(request.POST.get('is_search_camera', 0))
        is_selected_model = int(request.POST.get('is_selected_model', 0))
        is_search_model_camera = int(request.POST.get('is_search_model_camera', 0))
        is_search_camera_after_model = int(request.POST.get('is_search_camera_after_model', 0))

        # Persist one model-and-camera search log record.
        MchSearchModelAndCameraLogInfo.objects.create(
            user_id=user_id,
            is_search_model=is_search_model,
            is_search_camera=is_search_camera,
            is_selected_model=is_selected_model,
            is_search_model_camera=is_search_model_camera,
            is_search_camera_after_model=is_search_camera_after_model,
        )

        # The third argument is the Chinese success message: "model adaptation log collected successfully".
        return response(200, 'Collect Camera Adaptive Log Success', u'收集型号适配日志成功')