To: J3                                                     J3/19-221
From: Tom Clune
Subject: BFLOAT16 variant of REAL16 representation
Date: 2019-September-24
Reference: 19-139r1

I. Introduction

New generations of processors now provide half-precision
floating-point capabilities that often yield significant performance
advantages over conventional precisions.  While such capabilities
must be used carefully, various applications have now demonstrated
useful results.  In particular, machine learning algorithms can often
exploit such hardware capabilities in a robust fashion.

Paper 19-139r1 has already introduced a new REAL16 kind that
partially addresses this application space, but an increasingly
popular representation in the machine learning world is BFLOAT16.
This representation sacrifices mantissa bits for an expanded exponent
range: BFLOAT16 retains the 8-bit exponent of IEEE single precision
but provides only 7 fraction bits, whereas IEEE binary16 has a 5-bit
exponent and 10 fraction bits.

II. Use case

Enable portable development of applications that can exploit
increasingly available BFLOAT16 hardware capabilities.  This is
particularly relevant to machine learning.

III. Rough requirement

Expand ISO_FORTRAN_ENV to include BFLOAT16.  If, for any of these
constants, the processor supports more than one kind of that size, it
is processor dependent which kind value is provided.
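
As an illustration only, the following sketch shows how the proposed
constant might be used, assuming the processor supports the BFLOAT16
representation and that ISO_FORTRAN_ENV provides the named constant
as described above; the program name and variable are hypothetical.

    ! Hypothetical usage sketch; assumes ISO_FORTRAN_ENV provides
    ! BFLOAT16 as proposed above and that the processor supports
    ! the representation.
    program bfloat16_demo
       use, intrinsic :: iso_fortran_env, only: bfloat16
       implicit none
       real(bfloat16) :: w            ! storage in the 16-bit format
       w = 3.0e38_bfloat16            ! representable: the 8-bit
                                      ! exponent gives roughly the
                                      ! range of default REAL32
       print *, precision(w)          ! ~2 decimal digits
                                      ! (7 fraction bits)
       print *, range(w)              ! ~37, matching REAL32
    end program bfloat16_demo

Presumably, as with the existing REAL16, REAL32, REAL64, and REAL128
constants, BFLOAT16 would be negative on a processor that does not
support such a kind.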