Partition a Java 8 Stream


How can I implement a "partition" operation on a Java 8 Stream? By partition I mean dividing a stream into sub-streams of a given size. It would essentially behave like Guava's Iterators.partition, except that, desirably, the partitions would be lazily evaluated streams rather than lists.

As Jon Skeet has shown in his post, it does not seem possible to make the partitions lazy. For non-lazy partitioning I already have this code:

public static <T> Stream<Stream<T>> partition(Stream<T> source, int size) {
    final Iterator<T> it = source.iterator();
    final Iterator<Stream<T>> partIt = Iterators.transform(Iterators.partition(it, size), List::stream);
    final Iterable<Stream<T>> iterable = () -> partIt;

    return StreamSupport.stream(iterable.spliterator(), false);
}

It is impossible to partition an arbitrary source stream into fixed-size batches, because this would break parallel processing. When processing in parallel you may not know how many elements end up in the first sub-task after a split, so you cannot create the partitions for the next sub-task until the first one has been fully processed.

However, it is possible to create a stream of partitions from a random-access List. For example, such a feature is available in my library:
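As a rough illustration of the idea (not the library implementation), subList views over a random-access list can be exposed as a stream of sub-streams:

// Sketch only: partition a random-access List into a Stream of sub-streams.
// Each partition is a subList view, so no elements are copied.
public static <T> Stream<Stream<T>> partition(List<T> source, int size) {
    // Rounding-up division; note that this addition can overflow for list sizes
    // close to Integer.MAX_VALUE, one of the corner cases a real implementation
    // has to handle.
    int partitions = (source.size() + size - 1) / size;
    return IntStream.range(0, partitions)
            .mapToObj(i -> source.subList(i * size, Math.min((i + 1) * size, source.size()))
                                 .stream());
}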

The library implementation looks a bit longer, but it takes into account some corner cases, such as list sizes close to MAX_VALUE.


If you want a parallel-friendly solution for an unordered stream (so you do not care which stream elements end up combined in a single batch), you can use a collector like this (thanks to @sibnick for the inspiration):
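A minimal sketch of one way to write such a collector, assuming the downstream collector consumes whole batches as List<T> (the same signature the mapping overload below relies on):

public static <T, A, R> Collector<T, ?, R> unorderedBatches(int batchSize,
        Collector<List<T>, A, R> downstream) {
    // Per-thread state: the batch currently being filled plus the downstream accumulator.
    class Acc {
        List<T> cur = new ArrayList<>();
        A acc = downstream.supplier().get();
    }
    BiConsumer<Acc, T> accumulator = (acc, t) -> {
        acc.cur.add(t);
        if (acc.cur.size() == batchSize) {
            downstream.accumulator().accept(acc.acc, acc.cur); // emit a full batch
            acc.cur = new ArrayList<>();
        }
    };
    return Collector.of(Acc::new, accumulator,
            (acc1, acc2) -> {
                // Merge downstream results, then re-feed the other accumulator's leftovers.
                acc1.acc = downstream.combiner().apply(acc1.acc, acc2.acc);
                for (T t : acc2.cur) {
                    accumulator.accept(acc1, t);
                }
                return acc1;
            },
            acc -> {
                // Emit the final, possibly incomplete batch.
                if (!acc.cur.isEmpty()) {
                    downstream.accumulator().accept(acc.acc, acc.cur);
                }
                return downstream.finisher().apply(acc.acc);
            },
            Collector.Characteristics.UNORDERED);
}

For example, stream.parallel().collect(unorderedBatches(3, Collectors.toList())) yields a List<List<T>> in which at most one batch has fewer than three elements.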

Such a collector is perfectly thread-safe and produces ordered batches for a sequential stream.

If you want to apply an intermediate transformation to every batch, you can use the following version:

public static <T, AA, A, B, R> Collector<T, ?, R> unorderedBatches(int batchSize,
        Collector<T, AA, B> batchCollector,
        Collector<B, A, R> downstream) {
    return unorderedBatches(batchSize, 
            Collectors.mapping(list -> list.stream().collect(batchCollector), downstream));
}
For example, this way you can sum the numbers in every batch on the fly:

List<Integer> list = IntStream.range(0,20)
        .boxed().parallel()
        .collect(unorderedBatches(3, Collectors.summingInt(Integer::intValue), 
            Collectors.toList()));

I think it is possible with some kind of hack:

Create a utility class for batching:

public static class ConcurrentBatch {
    private AtomicLong id = new AtomicLong();
    private int batchSize;

    public ConcurrentBatch(int batchSize) {
        this.batchSize = batchSize;
    }

    public long next() {
        return (id.getAndIncrement()) / batchSize;
    }

    public int getBatchSize() {
        return batchSize;
    }
}
And the method:

public static <T> void applyConcurrentBatchToStream(Consumer<List<T>> batchFunc, Stream<T> stream, int batchSize){
    ConcurrentBatch batch = new ConcurrentBatch(batchSize);
    //hack java map: extends and override computeIfAbsent
    Supplier<ConcurrentMap<Long, List<T>>> mapFactory = () -> new ConcurrentHashMap<Long, List<T>>() {
        @Override
        public List<T> computeIfAbsent(Long key, Function<? super Long, ? extends List<T>> mappingFunction) {
            List<T> rs = super.computeIfAbsent(key, mappingFunction);
            //apply batchFunc to old lists, when new batch list is created
            if(rs.isEmpty()){
                for(Entry<Long, List<T>> e : entrySet()) {
                    List<T> batchList = e.getValue();
                    //todo: need to improve
                    synchronized (batchList) {
                        if (batchList.size() == batch.getBatchSize()){
                            batchFunc.accept(batchList);
                            remove(e.getKey());
                            batchList.clear();
                        }
                    }
                }
            }
            return rs;
        }
    };
    stream.map(s -> new AbstractMap.SimpleEntry<>(batch.next(), s))
            .collect(groupingByConcurrent(AbstractMap.SimpleEntry::getKey, mapFactory, mapping(AbstractMap.SimpleEntry::getValue, toList())))
            .entrySet()
            .stream()
            //map contains only unprocessed lists (size<batchSize)
            .forEach(e -> batchFunc.accept(e.getValue()));
}
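A possible way to invoke it (hypothetical example), printing batches of ten boxed integers taken from a parallel stream:

applyConcurrentBatchToStream(
        batch -> System.out.println(Thread.currentThread().getName() + ": " + batch),
        IntStream.range(0, 100).boxed().parallel(),
        10);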
If you are going to consume the stream sequentially, it is possible to partition a stream (as well as perform related functions such as windowing, which I think is what you really want here). Two libraries that support partitioning for standard streams are cyclops-streams (I am the author) and the library that cyclops-react extends (which adds functionality such as windowing).

cyclops-streams has a collection of static functions for operating on Java streams, including a series of functions for partitioning such as splitAt, headAndTail, splitBy and partition.

To window a stream into nested streams of size 30, you can use the window method.

In the OP's terms, splitting a stream into multiple streams of a given size is, in stream parlance, a windowing operation (rather than a partitioning operation).

  Stream<Streamable<Integer>> streamOfStreams = StreamUtils.window(stream,30);
There is an extended Stream class, ReactiveSeq, which extends Stream and adds windowing functionality; this may make the code a little cleaner.

  ReactiveSeq<Integer> seq;
  ReactiveSeq<ListX<Integer>> streamOfLists = seq.grouped(30);
As Tagir pointed out above, this will not work with parallel streams. If you want to window or batch a stream in a multithreaded fashion, LazyFutureStream may be useful (windowing is on the to-do list there, but plain old batching is available now).

In that case, data is passed from the multiple threads executing the stream to a multi-producer/single-consumer wait-free queue, and the sequential data from that queue can be windowed before being distributed to threads again.

  Stream<List<Data>> batched = new LazyReact().range(0,1000)
                                              .grouped(30)
                                              .map(this::process);

Here is a quick solution for your problem:
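IntStream.range(0, Integer.MAX_VALUE).split(size).forEach(s -> N.println(s.toArray()));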


Disclaimer: I'm the developer of AbacusUtil.

The most elegant, pure Java 8 solution I have found for this problem:

public static <T> List<List<T>> partition(final List<T> list, int batchSize) {
    return IntStream.range(0, getNumberOfPartitions(list, batchSize))
            .mapToObj(i -> list.subList(i * batchSize, Math.min((i + 1) * batchSize, list.size())))
            .collect(toList());
}

//https://stackoverflow.com/questions/23246983/get-the-next-higher-integer-value-in-java
private static <T> int getNumberOfPartitions(List<T> list, int batchSize) {
    return (list.size() + batchSize - 1) / batchSize;
}
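For example, partition(Arrays.asList(1, 2, 3, 4, 5), 2) yields [[1, 2], [3, 4], [5]].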

Here is a pure Java solution that is evaluated lazily, instead of using a List.

public static <T> Stream<List<T>> partition(Stream<T> stream, int batchSize){
    List<List<T>> currentBatch = new ArrayList<List<T>>(); //just to make it mutable 
    currentBatch.add(new ArrayList<T>(batchSize));
    return Stream.concat(stream
      .sequential()                   
      .map(new Function<T, List<T>>(){
          public List<T> apply(T t){
              currentBatch.get(0).add(t);
              return currentBatch.get(0).size() == batchSize ? currentBatch.set(0,new ArrayList<>(batchSize)): null;
            }
      }), Stream.generate(()->currentBatch.get(0).isEmpty()?null:currentBatch.get(0))
                .limit(1)
    ).filter(Objects::nonNull);
}
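For example, partition(IntStream.range(0, 10).boxed(), 3).forEach(System.out::println) prints [0, 1, 2], [3, 4, 5], [6, 7, 8] and finally the partial batch [9].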
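
Another pure-Java approach is to expose the source list as an AbstractList whose elements are the chunks, so that iterating over it yields one sub-list per partition: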
import java.util.AbstractList;
import java.util.ArrayList;
import java.util.List;

public final class Partition<T> extends AbstractList<List<T>> {

    private final List<T> list;
    private final int chunkSize;

    public Partition(List<T> list, int chunkSize) {
        this.list = new ArrayList<>(list);
        this.chunkSize = chunkSize;
    }

    public static <T> Partition<T> ofSize(List<T> list, int chunkSize) {
        return new Partition<>(list, chunkSize);
    }

    @Override
    public List<T> get(int index) {
        int start = index * chunkSize;
        int end = Math.min(start + chunkSize, list.size());

        if (start > end) {
            throw new IndexOutOfBoundsException("Index " + index + " is out of the list range <0," + (size() - 1) + ">");
        }

        return new ArrayList<>(list.subList(start, end));
    }

    @Override
    public int size() {
        return (int) Math.ceil((double) list.size() / (double) chunkSize);
    }
}

Partition<String> partition = Partition.ofSize(paCustomerCodes, chunkSize);

for (List<String> strings : partition) {
    // process each chunk of up to chunkSize customer codes here
}