JavaScript RxJS: subscribing to delayed results gives an empty stream

Tags: javascript, rxjs, event-loop

I have the code below. As it stands, with a couple of lines commented out, it works as expected: I subscribe to a stream, do some processing, and stream the data to the client. However, if I uncomment those lines, my stream is always empty, i.e. count in getEntryQueryStream is always 0. I suspect it has to do with me subscribing to the stream too late and thereby missing all the values.

// a wrapper of the mongodb driver => returns rxjs streams
import * as imongo from 'imongo';
import * as Rx from 'rx';
import * as _ from 'lodash';
import {elasticClient} from '../helpers/elasticClient';

const {ObjectId} = imongo;

function searchElastic({query, sort}, limit) {
    const body = {
        size: 1,
        query,
        _source: { excludes: ['logbookType', 'editable', 'availabilityTag'] },
        sort
    };
    // keep the search results "scrollable" for 30 secs
    const scroll = '30s';
    let count = 0;

    return Rx.Observable
        .fromPromise(elasticClient.search({ index: 'data', body, scroll }))
        .concatMap(({_scroll_id, hits: {hits}}) => {
            const subject = new Rx.Subject();

            // subject needs to be subscribed to before adding new values
            // and therefore completing the stream => execute in next tick
            setImmediate(() => {
                if(hits.length) {
                    // initial data
                    subject.onNext(hits[0]._source);
                    // code that breaks
                    //if(limit && ++count === limit) {
                        //subject.onCompleted();
                        //return;
                    //}

                    const handleDoc = (err, res) => {
                        if(err) {
                            subject.onError(err);
                            return;
                        }

                        const {_scroll_id, hits: {hits}} = res;

                        if(!hits.length) {
                            subject.onCompleted();
                        } else {
                            subject.onNext(hits[0]._source);
                            // code that breaks
                            //if(limit && ++count === limit) {
                                //subject.onCompleted();
                                //return;
                            //}

                            setImmediate(() =>
                                elasticClient.scroll({scroll, scrollId: _scroll_id},
                                    handleDoc));
                        }
                    };

                    setImmediate(() =>
                        elasticClient.scroll({scroll, scrollId: _scroll_id},
                            handleDoc));
                } else {
                    subject.onCompleted();
                }
            });

            return subject.asObservable();
        });
}

function getElasticQuery(searchString, filter) {
    const query = _.cloneDeep(filter);
    query.query.filtered.filter.bool.must.push({
        query: {
            query_string: {
                query: searchString
            }
        }
    });

    return _.extend({}, query);
}

function fetchAncestors(ancestorIds, ancestors, format) {
    return imongo.find('session', 'sparse_data', {
        query: { _id: { $in: ancestorIds.map(x => ObjectId(x)) } },
        fields: { name: 1, type: 1 }
    })
    .map(entry => {
        entry.id = entry._id.toString();
        delete entry._id;

        return entry;
    })
    // we don't care about the results
    // but have to wait for stream to finish
    .defaultIfEmpty()
    .last();
}

function getEntryQueryStream(entriesQuery, query, limit) {
    const {parentSearchFilter, filter, format} = query;

    return searchElastic(entriesQuery, limit)
        .concatMap(entry => {
            const ancestors = entry.ancestors || [];

            // if no parents => doesn't match
            if(!ancestors.length) {
                return Rx.Observable.empty();
            }

            const parentsQuery = getElasticQuery(parentSearchFilter, filter);
            parentsQuery.query.filtered.filter.bool.must.push({
                terms: {
                    id: ancestors
                }
            });

            // fetch parent entries
            return searchElastic(parentsQuery)
                .count()
                .concatMap(count => {
                    // no parents match query
                    if(!count) {
                        return Rx.Observable.empty();
                    }

                    // fetch all other ancestors that weren't part of the query results
                    // and are still a string (id)
                    const restAncestorsToFetch = ancestors.filter(x => _.isString(x));
                    return fetchAncestors(restAncestorsToFetch, ancestors, format)
                        .concatMap(() => Rx.Observable.just(entry));
                });
        });
}

function executeQuery(query, res) {
    try {
        const stream = getEntryQueryStream(query);
        // stream is passed on to another function here where we subscribe to it like:
        // stream
        //     .map(x => whatever(x))
        //     .subscribe(
        //         x => res.write(x),
        //         err => console.error(err),
        //         () => res.end());
    } catch(e) {
        logger.error(e);
        res.status(500).json(e);
    }
}

I don't understand why those few lines break everything, or how I could fix it.
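To illustrate the asker's suspicion: a plain Subject is a hot stream, so anything emitted before a subscriber attaches is simply lost. The dependency-free sketch below uses a hand-rolled makeSubject, a hypothetical stand-in for Rx.Subject rather than the real class, to show the effect:

```javascript
// Hand-rolled "subject" to demonstrate hot-stream behaviour.
// This is a sketch, NOT the real Rx.Subject, but it drops
// pre-subscription values the same way.
function makeSubject() {
    const observers = [];
    return {
        subscribe(observer) { observers.push(observer); },
        next(value) { observers.forEach(o => o.next(value)); },
        complete() { observers.forEach(o => o.complete && o.complete()); }
    };
}

const received = [];
const subject = makeSubject();

// emitted before anyone subscribes => nobody receives it
subject.next('early value');

subject.subscribe({ next: v => received.push(v) });
subject.next('late value');

console.log(received); // logs ['late value']; the early value was lost
```

This is why the question's code defers emission with setImmediate(): it postpones onNext until after the downstream subscription has been set up.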

Your use case is quite complex. You can start by building up the searchElastic method along the pattern below:

  • First, turn elasticClient.scroll into an observable
  • Set up the initial data with elasticClient.search()
  • Once the search resolves you should get your scroll_id
  • The expand() operator is used to execute the elasticClientScroll observable recursively
  • Use map to select the data you want to return
  • Spend some time deciding when to complete this stream
  • The desired result is that once you call searchElastic().subscribe(), the stream keeps emitting until there is no more data to fetch

    Hope this structure is right and gets you started.

    function searchElastic({ query, sort }, limit) {
        // elasticClient.scroll uses a node-style (err, res) callback
        // (see handleDoc in the question), so fromNodeCallback is the
        // appropriate conversion here
        const elasticClientScroll = Observable.fromNodeCallback(elasticClient.scroll);
        const obj = {
            body: {
                size: 1,
                query,
                _source: { excludes: ['logbookType', 'editable', 'availabilityTag'] },
                sort
            },
            scroll: '30s'
        };

        return Observable
            .fromPromise(elasticClient.search({ index: 'data', body: obj.body, scroll: obj.scroll }))
            .expand(({ _scroll_id, hits: { hits } }) => {
                // guess there is more logic here...
                // to update the scroll id or something
                return elasticClientScroll({ scroll: obj.scroll, scrollId: _scroll_id })
                    .map(res => res); // ...select the part of res you want to return
            })
            .takeWhile(res => res.hits.hits.length);
    }
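The recursive fetch shape that an expand()/takeWhile() pipeline produces can be sketched without any dependencies. In the sketch below, fakeSearch and fakeScroll are hypothetical stand-ins for elasticClient.search and elasticClient.scroll, paging through a fixed in-memory data set:

```javascript
// Dependency-free sketch of the expand()-style recursion:
// fetch an initial page, then keep fetching follow-up pages
// while results remain. fakeSearch/fakeScroll are hypothetical
// stand-ins for elasticClient.search / elasticClient.scroll.
const pages = [['a'], ['b'], ['c'], []]; // empty last page => stop

function fakeSearch() {
    return Promise.resolve({ scrollId: 1, hits: pages[0] });
}
function fakeScroll(scrollId) {
    return Promise.resolve({ scrollId: scrollId + 1, hits: pages[scrollId] });
}

async function searchAll() {
    const out = [];
    let res = await fakeSearch();             // initial data, like .search()
    while (res.hits.length) {                 // like takeWhile(res => res.hits.length)
        out.push(...res.hits);                // like map() selecting the data
        res = await fakeScroll(res.scrollId); // like expand() re-issuing .scroll()
    }
    return out;
}

searchAll().then(all => console.log(all)); // logs ['a', 'b', 'c']
```

Each follow-up request depends on the scrollId returned by the previous one, which is exactly the dependency chain expand() expresses declaratively.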
    

    You're right, I think I need to subscribe before calling onNext. @FanCheung, do you know how I could achieve that with what I posted?

    First of all, you don't need to set up a Subject here; you need to refactor searchElastic() and turn elasticClient.scroll into an observable. Then you can focus on chaining and combining observables: merge, combineLatest and forkJoin make them work together. The reason the code worked before is that you used setImmediate() to trick the execution order. I suspect the reason it doesn't work now is that the stream has already completed. Fundamentally, though, this isn't the right design pattern for reactive functional programming.

    @FanCheung, would you mind answering with a piece of code showing how to do this without a Subject? Thanks. The problem is that I first have to do an elasticClient.search for the initial data and then successively execute elasticClient.scrolls to fetch the next batches. But every scroll needs a scroll_id, which is returned by search/scroll, so each call depends on the previous one.

    Updated the answer. Please read about the expand() operator in the API docs and see whether you can get there: expand(value => innerObservable.map(res => res)). Pay attention to what the innerObservable inside expand returns, because it is recursive: its result is fed back into expand until takeWhile(res => ...) stops the stream.

    The rxjs part works as expected, thanks! (Although I'm now running into problems on the elasticsearch side, I'll ask a new question about that.)
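The advice in the comments to turn elasticClient.scroll into an observable boils down to wrapping its node-style (err, res) callback. Here is a minimal dependency-free sketch of such a wrapper, using a promise instead of an observable for brevity; wrapNodeCallback and slowDouble are illustrative names, not the real RxJS or elasticsearch API:

```javascript
// Sketch of converting a node-style callback API into a composable
// async function (conceptually what Rx's fromNodeCallback does).
// slowDouble is a hypothetical stand-in for elasticClient.scroll.
function wrapNodeCallback(fn) {
    return (...args) =>
        new Promise((resolve, reject) =>
            fn(...args, (err, res) => (err ? reject(err) : resolve(res))));
}

function slowDouble(x, cb) {
    // async source with a node-style (err, result) callback
    setImmediate(() => cb(null, x * 2));
}

const slowDoubleAsync = wrapNodeCallback(slowDouble);
slowDoubleAsync(21).then(v => console.log(v)); // logs 42
```

Once the callback API is wrapped like this, errors flow through the rejection/onError channel automatically and no manual Subject bookkeeping is needed.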