Summary of problems encountered while using Flink

Problem 1: How do we ensure that data lands accurately in the same partition according to event time?

/**
 * @Author: wenwei
 * @Date : 2020/9/8 22:15
 * Custom bucketing rule.
 * 1: Which format to use for the bucket (directory) name; the default is yyyy-MM-dd--HH.
 */
@PublicEvolving
public class CustomBucketAssigner<IN> implements BucketAssigner<IN, String> {

    private static final long serialVersionUID = 1L;

    private static final String DEFAULT_FORMAT_STRING = "yyyy-MM-dd--HH";

    private final String formatString;

    private final ZoneId zoneId;

    private transient DateTimeFormatter dateTimeFormatter;

    /**
     * Creates a new {@code CustomBucketAssigner} with format string {@code "yyyy-MM-dd--HH"}.
     */
    public CustomBucketAssigner() {
        this(DEFAULT_FORMAT_STRING);
    }

    /**
     * Creates a new {@code CustomBucketAssigner} with the given date/time format string.
     *
     * @param formatString The format string that will be given to {@code DateTimeFormatter} to determine
     *                     the bucket id.
     */
    public CustomBucketAssigner(String formatString) {
        this(formatString, ZoneId.systemDefault());
    }

    /**
     * Creates a new {@code CustomBucketAssigner} with format string {@code "yyyy-MM-dd--HH"} using the given timezone.
     *
     * @param zoneId The timezone used by {@code DateTimeFormatter} to format the bucket id.
     */
    public CustomBucketAssigner(ZoneId zoneId) {
        this(DEFAULT_FORMAT_STRING, zoneId);
    }

    /**
     * Creates a new {@code CustomBucketAssigner} with the given date/time format string using the given timezone.
     *
     * @param formatString The format string that will be given to {@code DateTimeFormatter} to determine
     *                     the bucket path.
     * @param zoneId The timezone used by {@code DateTimeFormatter} to format the bucket id.
     */
    public CustomBucketAssigner(String formatString, ZoneId zoneId) {
        this.formatString = Preconditions.checkNotNull(formatString);
        this.zoneId = Preconditions.checkNotNull(zoneId);
    }
    // Bucket by event time: derive the bucket id from the current watermark.
    @Override
    public String getBucketId(IN element, BucketAssigner.Context context) {
        if (dateTimeFormatter == null) {
            dateTimeFormatter = DateTimeFormatter.ofPattern(formatString).withZone(zoneId);
        }
        // Name the bucket directory with a fixed Hive-style partition prefix.
        return "p_data_day=" + dateTimeFormatter.format(Instant.ofEpochMilli(context.currentWatermark()));
    }

    @Override
    public SimpleVersionedSerializer<String> getSerializer() {
        return SimpleVersionedStringSerializer.INSTANCE;
    }

    @Override
    public String toString() {
        return "CustomBucketAssigner{" +
                "formatString='" + formatString + '\'' +
                ", zoneId=" + zoneId +
                '}';
    }

}
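The core of getBucketId, turning the current watermark into a Hive-style partition directory name, can be exercised in isolation with plain Java. A minimal sketch; the timestamp and the Asia/Shanghai zone below are illustrative assumptions, not taken from the original job:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class BucketIdDemo {
    public static void main(String[] args) {
        // Same formatting logic as CustomBucketAssigner.getBucketId:
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd--HH")
                .withZone(ZoneId.of("Asia/Shanghai"));
        // Pretend this epoch-millis value is context.currentWatermark().
        long watermark = 1599574500000L; // 2020-09-08 22:15:00 +08:00
        String bucketId = "p_data_day=" + fmt.format(Instant.ofEpochMilli(watermark));
        System.out.println(bucketId); // p_data_day=2020-09-08--22
    }
}
```

Because the bucket id is derived from the watermark rather than processing time, late-arriving records still go to the partition of the event time the watermark has reached.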

Problem 2: How does Flink divide windows precisely?

How do we define the window boundaries correctly, so that data is partitioned accurately by event time and data from the previous day never leaks into the next time partition? The window source code is worth studying, in particular how the window start is computed:

/**
     * Method to get the window start for a timestamp.
     *
     * @param timestamp epoch millisecond to get the window start (the time the event occurred)
     * @param offset The offset which the window start is shifted by (configured on
     *               TumblingEventTimeWindows; 0 by default)
     * @param windowSize The size of the generated windows
     * @return window start
     *
     * For example, windowSize = 5s, offset = 0, timestamps 2s and 7s:
     *   2 - (2 - 0 + 5) % 5 = 0
     *   7 - (7 - 0 + 5) % 5 = 5
     * For windowSize = 7s, offset = 0, timestamps 2s and 7s:
     *   2 - (2 - 0 + 7) % 7 = 0
     *   7 - (7 - 0 + 7) % 7 = 7
     * For windowSize = 5s, offset = 1s, timestamps 2s and 7s:
     *   2 - (2 - 1 + 5) % 5 = 1
     *   7 - (7 - 1 + 5) % 5 = 6
     * For windowSize = 7s, offset = 1s, timestamps 2s and 7s:
     *   2 - (2 - 1 + 7) % 7 = 1
     *   7 - (7 - 1 + 7) % 7 = 1
     */
    public static long getWindowStartWithOffset(long timestamp, long offset, long windowSize) {
        return timestamp - (timestamp - offset + windowSize) % windowSize;
    }
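The formula can be verified with plain Java; the values below reproduce the worked examples from the comment (sizes and offsets in seconds for readability, though Flink actually works in milliseconds):

```java
public class WindowStartDemo {
    // Same formula as in getWindowStartWithOffset; the "+ windowSize" term
    // keeps the modulus non-negative even for timestamps smaller than offset.
    static long windowStart(long timestamp, long offset, long windowSize) {
        return timestamp - (timestamp - offset + windowSize) % windowSize;
    }

    public static void main(String[] args) {
        System.out.println(windowStart(2, 0, 5)); // 0
        System.out.println(windowStart(7, 0, 5)); // 5
        System.out.println(windowStart(2, 0, 7)); // 0
        System.out.println(windowStart(7, 0, 7)); // 7
        System.out.println(windowStart(2, 1, 5)); // 1
        System.out.println(windowStart(7, 1, 5)); // 6
        System.out.println(windowStart(2, 1, 7)); // 1
        System.out.println(windowStart(7, 1, 7)); // 1
    }
}
```

Note that with size 7 and offset 1, timestamp 7 falls into the window [1, 8), so its window start is 1.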

Problem 3: As data volume kept growing, parsing IP addresses caused too many open file handles

  • Turn the IP-parsing class into a singleton (still room for further optimization)

public class Ip2regionSingleton {

    private static Logger logger = LoggerFactory.getLogger(Ip2regionSingleton.class);

    private static Ip2regionSingleton instance = new Ip2regionSingleton();

    private static DbConfig config;
    private static DbSearcher searcher;


    public DbSearcher getSearcher() {
        return searcher;
    }


    // Private constructor: load the ip2region database exactly once.
    private Ip2regionSingleton() {

        String path = Ip2regionSingleton.class.getResource("/").getPath();
        String dbPath = path + "plugins/ip2region.db";
        File file = new File(dbPath);

        logger.info("Ip2regionSingleton initializing, dbPath: {}", dbPath);

        if (file.exists()) {
            try {
                config = new DbConfig();
                searcher = new DbSearcher(config, dbPath);
            } catch (Exception e) {
                logger.error("Ip2regionSingleton init failed: {}", e.getMessage(), e);
            }
        } else {
            logger.error("ip2region.db not found at {}", dbPath);
        }
    }

    public static Ip2regionSingleton getInstance() {
        return instance;
    }

}
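The eager singleton above works, but as a possible refinement, the initialization-on-demand holder idiom delays loading the database until first use while staying thread-safe without locks. A generic sketch; the Resource class is a stand-in for DbSearcher, not part of the original code:

```java
public class LazySingletonDemo {
    // Stand-in for the expensive resource (e.g. the ip2region DbSearcher).
    static class Resource { }

    private final Resource resource = new Resource();

    private LazySingletonDemo() { }

    // The JVM loads Holder only on first access to getInstance(), and class
    // loading guarantees the instance is constructed exactly once, thread-safely.
    private static class Holder {
        private static final LazySingletonDemo INSTANCE = new LazySingletonDemo();
    }

    public static LazySingletonDemo getInstance() {
        return Holder.INSTANCE;
    }

    public Resource getResource() {
        return resource;
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance()); // true
    }
}
```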

Problem 4: How to resolve jar dependency conflicts in the Flink pom file

  • Use the Maven Helper plugin to find the conflicting jars;
  • Remove the conflicting jars via <exclusion> entries
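For reference, an exclusion in the pom looks like the fragment below; the conflicting group and artifact ids are placeholders, not taken from the original project:

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.11</artifactId>
    <version>1.9.0</version>
    <exclusions>
        <!-- Drop the transitively pulled-in jar that clashes with another dependency. -->
        <exclusion>
            <groupId>conflicting.group</groupId>
            <artifactId>conflicting-artifact</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```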

Problem 5: How to guarantee end-to-end consistency and ordering of data in Flink

  • Guarantee the ordering of data in Kafka (global ordering is essentially impossible, but per-partition ordering is achievable)

  • In Kafka, each machine runs a broker hosting multiple partitions; each partition is replicated across brokers in a leader-follower fashion, which keeps replicas consistent

  • Enable exactly-once semantics in Flink:

   env.enableCheckpointing(parameter.getLong("checkpoint.cycle", 300 * 1000L), CheckpointingMode.EXACTLY_ONCE);

Problem 6: How to advance the watermark and trigger window computation when no new event data arrives

  • For some streams, the interval between event-time updates is larger than the window size; with a punctuated watermark generator, the watermark then never advances. Switching to AssignerWithPeriodicWatermarks, which emits watermarks periodically, solves this.

    private static class CustomWatermarks<T> implements AssignerWithPunctuatedWatermarks<PageActivityDO> {
            private static final long serialVersionUID = 1L;
            private Long currentTime = 0L;
            // allow 2 minutes of lateness (watermarks are in milliseconds)
            private Long allowDelayTime = 2 * 60 * 1000L;
            @Override
            public Watermark checkAndGetNextWatermark(PageActivityDO topic, long l) {
                return new Watermark(currentTime - allowDelayTime);
            }
            @Override
            public long extractTimestamp(PageActivityDO topic, long l) {
                DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
    
                if(StringUtils.isNullOrWhitespaceOnly(topic.getPoint_time())){
                    return currentTime;
                }
                LocalDateTime localDateTime = LocalDateTime.parse(topic.getPoint_time(), formatter);
                currentTime = Math.max(localDateTime.toInstant(ZoneOffset.of("+8")).toEpochMilli(), currentTime);
    
                return currentTime;
            }
    
    
        }
    
  private static class CustomWatermarksPeriodc<T> implements AssignerWithPeriodicWatermarks<ActivityInfoDO> {
        private static final long serialVersionUID = 1L;
        // allow 30 seconds of lateness (in milliseconds)
        private Long allowDelayTime = 30000L;

        @Override
        public long extractTimestamp(ActivityInfoDO topic, long l) {
            DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

            if(StringUtils.isNullOrWhitespaceOnly(topic.getPush_time())){
                return System.currentTimeMillis();
            }
            LocalDateTime localDateTime = LocalDateTime.parse(topic.getPush_time(), formatter);
            logger.info("extractTimestamp,currentWatermark:{}",localDateTime );
            return localDateTime.toInstant(ZoneOffset.of("+8")).toEpochMilli();


        }


        @Nullable
        @Override
        public Watermark getCurrentWatermark() {
            logger.info("getCurrentWatermark, currentWatermark:{}",System.currentTimeMillis() - allowDelayTime);
            return new Watermark(System.currentTimeMillis() - allowDelayTime);
        }
    }
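The lateness values in these assigners are in milliseconds, because timestamps and watermarks carry epoch millis; a common pitfall is writing `120L` intending two minutes, which only allows 120 ms of lateness. The conversion is easy to check with plain Java:

```java
import java.time.Duration;

public class DelayUnitDemo {
    public static void main(String[] args) {
        // Two minutes expressed in milliseconds, the unit watermarks use.
        long twoMinutesMillis = Duration.ofMinutes(2).toMillis();
        System.out.println(twoMinutesMillis);         // 120000
        System.out.println(twoMinutesMillis == 120L); // false: 120L is only 120 ms
    }
}
```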
  • Note: when choosing a periodic watermark generator, you must also configure the automatic watermark emission interval, e.g. env.getConfig().setAutoWatermarkInterval(1000);

Problem 7: How to implement two-phase commit so that data is written idempotently and transactionally

  • Make sure the source data can be replayed
  • The sink must support transactions (pre-commit, rollback, commit)
  • Alternatively, deduplicate at the sink via a unique key
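The pre-commit/commit/rollback cycle in the second bullet can be sketched as a plain state machine. This is an illustration of the protocol only, independent of Flink's actual TwoPhaseCommitSinkFunction API; all names below are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative two-phase-commit sketch: records are buffered during the
// "transaction", made durable at pre-commit, and only published at commit.
public class TwoPhaseCommitDemo {
    final List<String> pending = new ArrayList<>();      // current transaction
    final List<String> preCommitted = new ArrayList<>(); // durable but not yet visible
    final List<String> committed = new ArrayList<>();    // visible to readers

    void write(String record) { pending.add(record); }

    // Phase 1: make the pending records durable; after this point a crash
    // must not lose them, but readers still cannot see them.
    void preCommit() { preCommitted.addAll(pending); pending.clear(); }

    // Phase 2: atomically publish everything that was pre-committed.
    void commit() { committed.addAll(preCommitted); preCommitted.clear(); }

    // On failure between the phases, discard the pre-committed data instead.
    void abortTransaction() { preCommitted.clear(); pending.clear(); }

    public static void main(String[] args) {
        TwoPhaseCommitDemo sink = new TwoPhaseCommitDemo();
        sink.write("a");
        sink.preCommit(); // checkpoint barrier reached at this operator
        sink.commit();    // checkpoint completed on all operators
        System.out.println(sink.committed); // [a]
    }
}
```

In Flink, pre-commit is tied to the checkpoint barrier and commit to the checkpoint-complete notification, which is what makes the sink exactly-once together with a replayable source.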

Problem 8: Frequent errors when sinking to MySQL

  • Error message: The last packet successfully received from the server was 1,203,500 milliseconds ago.
  • This may be caused by the JDBC driver version, and it typically means the connection sat idle longer than the MySQL server allows; using a MySQL connection pool that validates idle connections is recommended

Using ValueState correctly

For intermediate state that is not huge, Flink's FsStateBackend is a good choice: StateBackend fsStateBackend = new FsStateBackend(parameter.get("flink.state.path"));

  • StateTtlConfig controls how long state is retained, when the TTL is refreshed, and whether expired state is still visible:
 StateTtlConfig ttlConfig = StateTtlConfig
                .newBuilder(Time.days(ttlDays))
                .setUpdateType(StateTtlConfig.UpdateType.OnReadAndWrite)
                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
//                .cleanupInRocksdbCompactFilter(1000L)
                .build();

References:

1: Flink's hardest concepts explained again: time / windows / watermarks / handling late data

2: An investigation of tumbling-window boundaries and data latency with Flink's timeWindow

3: Kafka overview: understanding the architecture in depth

4: Generating watermarks

5: Two-phase commit (2PC) and its application in Flink exactly-once

Note: this article is based on Flink 1.9.

最后編輯于
?著作權(quán)歸作者所有,轉(zhuǎn)載或內(nèi)容合作請(qǐng)聯(lián)系作者
【社區(qū)內(nèi)容提示】社區(qū)部分內(nèi)容疑似由AI輔助生成,瀏覽時(shí)請(qǐng)結(jié)合常識(shí)與多方信息審慎甄別。
平臺(tái)聲明:文章內(nèi)容(如有圖片或視頻亦包括在內(nèi))由作者上傳并發(fā)布,文章內(nèi)容僅代表作者本人觀點(diǎn),簡(jiǎn)書(shū)系信息發(fā)布平臺(tái),僅提供信息存儲(chǔ)服務(wù)。

友情鏈接更多精彩內(nèi)容