New or Changed Features
This page provides a quick overview of added or changed features and when they were introduced.
Patches will be downloadable from the "Winter 2015" release. When severe bugs are fixed, the previous patch will usually be removed.
- Module `Data.HashMap` provides a well documented hash map implementation based on hash array mapped tries.
- Module `frege.prelude.Floating` is gone. The `Floating` class moved to `frege.prelude.Math` and supports all functions from `java.lang.Math`, which itself is replicated in `frege.java.lang.Math`; but note that the functions there are overloaded and thus unpleasant to use. Therefore, if you need to do number crunching with floating point numbers, just import `Prelude.Math` and you're ready to go.
- Fixed Issue #192. It is now an error to write a type like `Mutable s BufferedReader` when `BufferedReader` is declared `mutable native`.
- A new definition syntax makes it possible to let the class generated for the module extend some other class and implement some interfaces. In addition, arbitrary Java code can be supplied verbatim. See also here.
- Fixed Issue #125, where a mutable native type could be unified with a non-mutable native one.
- Fixed Issue #126, where instance definitions couldn't handle aliases in the where clause of type definitions.
- The `Byte` and `Short` types are now fully supported, with instances for `Show`, `Eq`, `Ord`, `Enum`, `Bounded`, `Num` and `Integral`. All the instances treat them as unsigned quantities, much to the grief of hardcore Java fans. However, there are functions `Byte.signed` and `Short.signed` that get the signed value. (You would not believe how many Java programmers sincerely believe that bytes and short integers are signed, rather than that they are merely interpreted as signed. This is because they don't have a clue how two's-complement representation works, and in discussions drivel something about allegedly existing "sign bits" ...)
- There is support for JSON in module `Data.JSON`. It is possible to parse JSON strings to an internal format, to convert such JSON values to Frege values and back, and to show JSON values, which again results in valid JSON text.
- Fixed Issue #102, where the compiler didn't check the types of constructor fields rigidly enough.
- Several issues fixed.
- New functionality for the Eclipse plugin: "annotate definition" content proposal, and documentation of fixity and precedence of operators in scope.
- Generalisation of the underscore construct `_.foo`. For example, when we write `s.startsWith "foo"` we assume that this means `String.startsWith s "foo"`, and this now also holds for `_.startsWith "foo"`, which means `\it -> it.startsWith "foo"` and not, as it used to be, `(\it -> it.startsWith) "foo"`, which is the same as `"foo".startsWith` and thus shifted the argument unexpectedly to the front.
- Fixed handling of infix expressions and implemented fixity resolution according to the Haskell 2010 Report, as outlined in section 10.6. In short: when resolving an expression `a ⊗ b ⊠ c`, either the operators must have different precedences, or they must have the same precedence and the same associativity; in addition, they must not be two non-associative operators of the same precedence, hence `a == b == c` is invalid.
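For illustration (the rejected combination is commented out):

```
a = 1 + 2 * 3       -- ok: different precedences, parsed as 1 + (2 * 3)
b = 1 : 2 : []      -- ok: same precedence, both right associative
-- c = 1 == 2 == 3  -- rejected: two non-associative operators of the same precedence
```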
The lion's share of the changes concerns modularization and purification of the compiler and is not immediately visible to the user. The compiler is now 100% pure, using a `StateT`/`IO` monad transformer.
- Lexical recognition of operators has been overhauled, and is now more Haskellish. A character for which the predicates `Char.isWhitespace` and `Char.isLetter` are both `false`, and that is not an ASCII digit and not one of `` `´'",;{} ``, is taken as an operator character. A non-empty sequence of operator characters, then, is either an operator name or one of the reserved symbols `<-`, `->`, `=>`, `::` or `..`. Infix expressions are reordered only in later passes, when the associativity and precedence of the functions that have an operator name is known.
The advantage is that source files can be parsed without knowing the precedence and associativity of imported operators. Operationally this means that the lexical analysis pass does not need to load classes.
Also, it is no longer necessary to write operators in accent marks in `infix` definitions.
Note that operators bound in `let` or `where` blocks cannot be assigned individual precedences and associativities; that is, `infix` definitions work only for non-local items. Local operators are therefore doomed to be non-associative with the highest precedence.
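For example, a top level operator can now be declared like this (a minimal sketch; the operator and its precedence are made up for illustration):

```
infixr 13 +++

xs +++ ys = xs ++ ys   -- fixity declared above, no accent marks needed around +++
```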
- Unicode support for the arrows (← U+2190 LEFTWARDS ARROW, → U+2192 RIGHTWARDS ARROW, ⇒ U+21D2 RIGHTWARDS DOUBLE ARROW), the double colon (∷ U+2237 PROPORTION), the arithmetic sequence operator (… U+2026 HORIZONTAL ELLIPSIS) and forall (∀ U+2200 FOR ALL). If those characters appear on their own and are not part of an operator name, they are taken as the special symbols `<-`, `->`, `=>`, `::`, `..` or the keyword `forall`, respectively.
Unless forbidden by the user through the `-ascii` flag, compiler, IDE and tools will use that notation to present types. Type variable names can be generated in Greek, Fraktur or Latin letters.
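For example, the following definitions use the Unicode symbols in place of `::`, `->`, `<-` and `..`:

```
double ∷ Int → Int
double x = 2 * x

pairs = [ (a, b) | a ← [1…3], b ← [1…3] ]
```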
- Regular expressions must be written in `´acute accent marks´` (U+00B4 ACUTE ACCENT); the older form `#in number signs#` is not supported anymore. In addition, and in order to make things easier on keyboards that don't have acute accent marks, one can write regular expressions in `'apostrophes'` as long as this can't be interpreted as a character literal. In other words, a string in apostrophes that is not a valid character literal will be interpreted as a regular expression literal. Note that a regular expression that looks for a single character X can be written as `(?:X)` instead of just `X`, so it should always be possible to disambiguate between character and regular expression literals.
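For example, both of the following bindings are regular expression literals:

```
spaces = ´\s+´     -- acute accent marks
comma  = '(?:,)'   -- apostrophes: not a valid character literal, hence a regex
```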
- `'\uxxxx'` is a valid character literal if the x are hexadecimal digits.
- Complete overhaul of the `-make` compiler option. See the compiler manpage.
- Expression syntax aligned with Haskell: `if`, `case`, `let` and lambda expressions now have a higher precedence than infix expressions. `do { ... }` remains a term, though, as this is even better than Haskell and should not hurt during porting.
- `Data.Bits` ported from Haskell.
- `String.format` and `PrintWriter.printf` with up to 9 arguments to format (not type safe, though).
- Support for `java.lang.ProcessBuilder` and related types through module `frege.java.lang.Processes`.
- Java package `java.util` fully frege-ized in module `frege.java.Util`.
- A `[Char]` is now printed as a string, like in Haskell.
- `forkIO` and `forkOS` are different now. The former submits a task to a fixed size thread pool executor which is provided by the runtime. The latter, as before, creates an OS thread.
- In IDE mode, the compiler will replace unresolved variables with `undefined` and will continue working, up to and including the type check. This makes it likely that unrelated code (and in many cases also code near the error, as `undefined` shouldn't introduce type errors) can still get typechecked, and IDE features like content assist will be able to do better work.
- Syntactic sugar: `_.foobar` desugars to `(\it -> it.foobar)`. This will typecheck if and only if the argument type of the lambda is not a type variable and the type of the argument has a `foobar` function, or `foobar` is a class operation. A successful type check will rewrite the lambda like `(\it -> T.foobar it)`, where `T` is the type constructor of the type of `it`. Later, a simplifying compiler pass will un-eta-expand the lambda, and the result is just `T.foobar`. This way, we can TDNR a function even if we have no actual syntactic argument it is applied to! This comes in handy in point free or monadic contexts:

```
letters = packed . filter _.isLetter . unpacked
foolines sss = openReader sss >>= _.getLines >>= return . any (flip _.startsWith "foo")
```

The first example filters the letters of a string. The second one tells whether a named file contains lines that start with "foo". Apart from saving a few keystrokes, the `_.foobar` notation is immune against changes of the type name. Also, you do not even have to be able to name the type that contains `foobar`.
- Access to characters in strings doesn't work anymore with the syntax `str.[index]`. Use `charAt` instead. The syntactic sugar is only valid for arrays.
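For example:

```
second = "hello".charAt 1   -- 'e'
-- "hello".[1]              -- no longer valid, strings are not arrays
```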
- Quickcheck 2.6 ported from Haskell.
- Quick, a tool to run Quickcheck properties of a module, or several modules. (If you try this on the fregec.jar, there will be 2 failing tests. No worries!)
- Better documentation, especially for overloaded native functions.
- Native array support via `JArray`.
- Better Unicode support in the scanner, which used to ignore surrogate pairs. Also, everything that is not explicitly uppercase is treated as lowercase, as far as Frege is concerned. This means you can now write variable names in scripts that do not have case (like Devanagari). Caveat: the scanner still does not recognize glyphs; that is, with a German keyboard I can enter `ä` and it will be `U+00E4 LATIN SMALL LETTER A WITH DIAERESIS`, which is a letter and hence acceptable in identifiers, but if one enters the canonical decomposition `ä` (`U+0061 LATIN SMALL LETTER A` + `U+0308 COMBINING DIAERESIS`) it won't be recognized as an identifier part, because the scanner still operates on single Unicode code points, and the combining diaeresis is not a letter but a non-spacing mark. Arguably, improvements are possible here.
- Regex literals are also more Unicode-ish: they are compiled with the UNICODE_CASE, CANON_EQ and UNICODE_CHARACTER_CLASS flags.
- Fixed a bunch of issues.
- Made the compiled code faster (and, unfortunately, at the same time bigger).
- Added a head-strict cons operator `(!:)`. This means that in `x+3 !: ys`, for example, `x+3` is evaluated right away, while `ys` is still lazy. List comprehension uses this, which often results in better code with less laziness overhead. If one needs to allow for undefined elements in the list, one must use map or the list monad.
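A small illustration:

```
xs = (1+2) !: []   -- the head (1+2) is evaluated right away
ys = (1+2) :  []   -- the head remains an unevaluated thunk until demanded
```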
- The type-unsafe special native return type `[a]` is no longer recognized. Use `JArray` to tell Frege that a Java method returns an array.
- Cleaned up the handling of let-bound functions. Until now, local functions that did not reference any local name from their outer scope were silently moved to the top level. This often resulted in more general types, though this is seldom necessary. From now on, if you want your local function to have a polymorphic type, you need to annotate it or write it as a top level function right away. Unannotated let-bound functions are thus never generalised by the type checker, which brings more consistency and often better code. Still, if the function doesn't use local variables from the enclosing scope, it might get moved to the top level so as to avoid inner classes. This is now also documented in the language reference and in "Differences to Haskell".
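A sketch of what this means in practice: the local function below needs its annotation to be usable at two different element types, since it is no longer generalised automatically.

```
pairUp = (g [1, 2, 3], g [true, false])
    where
        g :: [a] -> Int    -- required for the polymorphic use of g
        g xs = length xs
```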
- `main` can now have type `IO a` or `[String] -> IO a` for some type `a`. If `a` is not `()`, you need to annotate it, though. If the Frege `main` function returns `IO Int` or `IO Bool`, and if it is run through the supplied `static void main(String[] args)` Java method (i.e. as a command line application), the returned value will be used to determine the exit status, where `0` or `true` signal success. If `main` is not annotated and doesn't look like a function, it is assumed to be `IO ()`. Otherwise, if it has arguments, `[String] -> IO ()` is assumed, as before.
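For example, a `main` that reports failure through its exit status could look like this (a sketch; note the annotation, since the result type is not `()`):

```
main :: [String] -> IO Bool
main args = return (not (null args))   -- exit status 0 only when arguments were given
```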
- Implemented several convenience functions of Haskell heritage: `on`, `interact`, `readFile`, `writeFile`, `appendFile`, `getContents` and, as a substitute for Haskell's `hGetContents`, we have `getContentsOf`, which operates on a Reader.
- Improved type directed name resolution (TDNR). This means that expressions of the form `x.m`, where `x` is an expression and `m` is an identifier, will always (I hope!) be typeable whenever the type of `x` can be determined at all. The non-globalised lets contribute here, because there is more code where the type of `x` may get disclosed, but the algorithm used up to now also had a flaw that prevented proper typing in some cases. The better TDNR also applies to the other syntactic forms that trigger it: `x.{m?}`, `x.{m=v}` and `x.{m<-f}`.
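As a reminder, these forms are the record sugar for existence test, update and change of a field, sketched here with a made-up record type:

```
data Point = Point { x :: Double, y :: Double }

hasX   p = p.{x?}          -- true if the value was constructed with a field x
resetY p = p.{y = 0}       -- set field y
nudge  p = p.{x <- (+1)}   -- apply a function to field x
```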
- Liberalized type synonyms. We can now write:

```
type Discard a = forall b. Show b => a -> b -> (a, String)
```
- Flexibly typed numeric literals. One can now write, for example,

```
recip x = 1/x
```

and it will be typed `Double -> Double`. See `examples/NumericLiterals.fr` for rules and examples.
- Quickcheck command line tool to check properties of already compiled modules; try `java -cp fregec.jar frege.tools.Quick`.
- Bug fixes.
- Slight performance enhancements:
  - local values are not inlined when marked strict
  - inlining of the remaining ones now works better
  - annotated local values get globalized when they are constants
- Mutable native data/impure native functions:
As of version 3.21.107, older code dealing with mutable native data will probably cease to work. (Yet, the changes needed to get it working again are usually small, and should be confined to native declarations.)
The new design has been applied to all library stuff. For explanations, please see the latest language reference, chapter 8.2.
- Operators can now be qualified like everything else
It used to be so that qualified operators lost their syntactic magic. This has been fixed.
- Standard I/O
There are now 3 predefined input/output channels available in each program: `stdin`, `stdout` and `stderr`.
These are really a `BufferedReader` (for stdin) and `PrintWriter`s (for stdout and stderr) wrapped around the streams in `java.lang.System.in`, `.out` and `.err`.
The standard channels provide UTF-8 decoding/encoding, no matter what Java thinks the standard encoding should be. The print writers have the autoflush option set.
The following functions are available that operate on `stdout`: `print`, `println`, `putChar`, `putStr`, `putStrLn`.
The following functions are available that operate on `stdin`: `getChar`, `getLine`.
There are two functions for dealing with files:

```
openReader :: String -> IO BufferedReader
openWriter :: String -> IO PrintWriter
```

Buffered readers support: `read`, `readLine`, `getChar`, `getLine`, `getLines`.
The `read` functions return special values like -1 or `Nothing` on end of file, while the `getChar` and `getLine` functions throw `EOFException`.
Print writers support: `print`, `println` (unlike `Prelude.print`, they take strings, not instances of `Show`), `putChar` and `write`.
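A small example program using the standard channels and `openReader` (the file name is made up for illustration):

```
main = do
    println "What is your name?"
    name <- getLine
    println ("Hello, " ++ name)
    -- echo a file to standard output
    br <- openReader "input.txt"
    ls <- br.getLines
    mapM_ putStrLn ls
```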
It is now possible to deal with overloaded Java methods or constructors by simply declaring multiple types for their Frege counterparts:

```
data Writer = native java.io.Writer where
    native write :: Writer -> Int -> IO () throws IOException
                  | Writer -> String -> IO () throws IOException
                  | Writer -> String -> Int -> Int -> IO () throws IOException
```
In the type checker, the left-right bias for constructs like `x.m` has been removed. This used to type check only if the type of `x` was known before the type checker hit the expression. Now, resolution will succeed if the type of `x` is known in the surrounding function at all.
Exception handling has been re-worked, as will be described in the upcoming overhauled language spec. In short:
- Native functions can now catch different exceptions
This enhances the catch-all approach taken up to now.
The exceptions are encoded in an `Either` type as before, but if there is more than one exception, the left part of the `Either` type will be another `Either`, like

```
Either (Either (Either Ex1 Ex2) Ex3) Result
```

This either catches one of the exceptions `Ex1`, `Ex2` or `Ex3`, or returns a result of type `Result`.
A bit of syntactic sugar helps with such complex types: first, the notation `(Ex1|Ex2|Ex3|Result)` is equivalent to the above type; in fact, every `Either` type can be written `(a|b)`. The vertical bar acts like a left associative operator, hence `(a|b|c) = ((a|b)|c)`.
Second, the function `either` is now itself left associative, and there is the nice property that one can deconstruct a nested `Either` value whose type is `(a|b|c)` with

```
(fa `either` fb `either` fc) value
```

Take the type, replace each vertical bar with an infix `either` and each sub-type with a function that takes this type, and apply this to the value to get it deconstructed! This works for any nesting depth.
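For example, a value of type `(String|Int|Bool)` can be taken apart with two nested `either`s:

```
describe :: (String|Int|Bool) -> String
describe = id `either` show `either` show

-- describe (Left (Right 42)) == "42"
-- describe (Right true)      == "true"
```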
- Exception handling in the ST/IO monad
Native impure functions can now have `throws` clauses, where the thrown exception types can be listed.
An exception type is an immutable abstract type based on some Java exception, like:

```
data IOException = pure native java.io.IOException
derive Exceptional IOException
```

Support for exception handling is in module `frege.prelude.PreludeIO` with functions `catch`, `finally`, `try` and `throwST` (these functions are available by default).
This now works as described in detail in the upcoming language spec.
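Following the same pattern, one could declare another exception type and bind a native function with a `throws` clause (a hypothetical binding, just for illustration):

```
data NumberFormatException = pure native java.lang.NumberFormatException
derive Exceptional NumberFormatException

native parseInt java.lang.Integer.parseInt :: String -> IO Int throws NumberFormatException
```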
We now have the functions `enumFrom`, `enumFromThen`, `enumFromTo` and `enumFromThenTo` in class `Enum`, and arithmetic sequences are syntactically supported.
All forms of arithmetic sequences of trivial types like `()` produce a singleton list where the list element is the only possible value of that type.
`Float` and `Double` do not have instances of `Enum`, hence there are no floating point arithmetic sequences. (Though there is nothing that prevents one from rolling one's own.)
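Some examples:

```
evens = [0, 2 .. 10]   -- [0, 2, 4, 6, 8, 10]
abc   = ['a' .. 'e']   -- ['a','b','c','d','e'], printed as "abcde"
units = [() ..]        -- the singleton list [()]
```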
Available from 3.21.33-g5e7b72f
January 8, 2013: New runtime system, minor library changes, strict/non-strict fields in constructors
The new runtime system is more orthogonal than the previous one and allows faster code generation. Unless you want to call Frege code from Java, it is of no concern. Except that now native methods that are declared to return lists are expected to return arrays or Iterables, and the result is converted to a Frege list no matter what the element type of the list is. The data in the Java array/Iterable must of course be assignment-compatible with the given element type.
In the course of sweeping up, the package `frege.rt` is gone and has been replaced by `frege.runtime`. Also, there are no more Java classes `frege.RT` and `frege.MD`; the corresponding code is now in `frege.runtime.Runtime` and `frege.runtime.Meta`.
Likewise, the Frege modules under `frege.j.` have moved to `frege.java.`. The module `frege.IO` is considered deprecated and will be removed in the near future. All Java SE stuff can henceforth be imported from `frege.java.Xyz`, where Xyz is the capitalized last component of the Java package name.
Until now it was possible to state that all fields of a data constructor should be strict, like in:

```
data Foo = !Bar {field1::X, field2::Y}
```

From now on, this syntax is considered deprecated (yet it will continue to work for a while). Instead, individual fields can now be marked strict:

```
data Foo = Bar {!field1::X, !field2::Y}   -- new syntax for the above
```

This makes it possible to have strict as well as non-strict fields in one constructor.
Last but not least, the version number now contains the first 7 digits of the commit SHA, so that it may be easier to look up what changes went into a given version. For example, the version 3.21.8-geeec484 corresponds to commit eeec484d18724617e4baf0cab4206ac3a03fff21.
October 31, 2012: Type checker aware of Java sub/superclass relationships, support for returned arrays and Iterables
Up to now, the type checker checked native types just like any other Frege type. Very much simplified, this boils down to checking whether type constructors match.
From now on, if two native types are matched, the type checker considers it ok if the Java type associated with the expected type is a super class or super interface of the one associated with the inferred type.
In addition, if we have `x.m` and the type of `x` is some `T ...` and `T` is a native type, all known namespaces of types that are associated with the Java type that is associated with `T` are searched for `m`.
Here is a short example:

```
package frege.java.Net where

data URL s = native java.net.URL where
    native new        :: String -> IO (Exception (URL Immutable))
    native openStream :: URL Immutable -> IO (Exception (InputStream RealWorld))
    -- more methods omitted

protected data InputStream a = native java.io.InputStream
```

This defines the `URL` type. The return type of `openStream` is `InputStream`.
Another file contains this:

```
package frege.java.IO where

--- The normal form of an InputStream -- 'RealWorld' (mutated by IO actions)
type InputStream = InputStreamT RealWorld
type Closeable   = CloseableT RealWorld

--- frege equivalent of @java.io.InputStream@
data InputStreamT s = native java.io.InputStream where
    native read :: InputStream -> IO (Exception Int)
    -- more methods omitted

--- frege equivalent of @java.io.Closeable@
data CloseableT s = native java.io.Closeable where
    native close :: Closeable -> IO (Exception ())
```

Here we have a more detailed definition of `InputStream`.
Now we can use this like so:

```
import Java.Net
import Java.IO

main _ = do
    url  <- URL.new "http://www.google.com" >>= either throw return
    ios  <- url.openStream >>= either throw return
    byte <- ios.read >>= either throw println
    ios.close
    return ()
```
The variable `ios` has type `Net.InputStream RealWorld`, but because the type constructor is associated with `java.io.InputStream`, also `IO.InputStreamT` is searched for the method `read`, which is found there.
This method expects an `IO.InputStreamT RealWorld`, but the different type constructor is ok, because it is also associated with `java.io.InputStream`.
The same mechanism works with super-classes and super-interfaces. The expression `ios.close` looks for `close` in all types that are associated with any known super-class or super-interface of `java.io.InputStream`, and since the latter implements `java.io.Closeable`, it is found there.
The rewritten expression `CloseableT.close ios` type checks, because the inferred type for `ios`, namely `Net.InputStream`, denotes a Java type that is a subtype of the Java type that is associated with the expected type `CloseableT RealWorld`.
This sounds more difficult than it is. In reality, the Frege compiler does some reflection on all Java types it learns from the program text or imported packages and tries to do what one expects anyway.
This gives us the opportunity to interface with Java more easily.
We don't need to model the class hierarchy anymore with Frege type classes.
Rather, we just name the classes and methods we're interested in.
In addition, we should never face the situation where we want to import package `java.A` from `java.B`, while at the same time we need to import `java.B` from `java.A`. We can just use simple "forward" declarations (like the one for `InputStream` in `frege.java.Net`) in `java.A` for all Java types introduced in `java.B`, and vice versa.
Another change is that a native method whose declared return type is `[R]`, where `R` is a type associated with a Java reference type `J`, will be good for corresponding Java methods that return a `J[]` or an `Iterable<J>`. In both cases the result is converted to a list. If the method returns `null`, this will result in an empty list. Any `null` values in the array or the `Iterable` are skipped.
The conversion is strict in the case of arrays, but lazy with `Iterable`. The latter is because we can traverse the `Iterable` only once; hence the list would come out in reverse order if we created it all at once.
The changes will be available from version 3.20.42 on; this corresponds to commit 24d21431.
To further enhance Haskell compatibility, the token `.` will be regarded as if it were `•` if (at least) one of the following conditions holds:
- The token preceding `.` is `(`
- The token following `.` is `)`
- The tokens on both sides of `.` are separated by whitespace.
Note that we speak here about the token `.`, not about the character. Hence, the dots in qualified names and multi-character operators like `..` are not affected.
Here is an example:

```
x = (.) f g   -- by rules 1 and 2 same as (•) f g
x = (f.) g    -- by rule 2 same as (f•) g
x = (.g) f    -- by rule 1 same as (•g) f
x = f . g     -- by rule 3 same as f • g
```
Note that it is not possible to mention a single `.` in an infix declaration. This would be like saying: we have 3 planets between the sun and the earth: Mercury, the morning star and the evening star.
It is possible that this change breaks existing code. Yet, in the some 30,000 lines of the Frege system there was only one instance where this happened. Hence, it looks like the benefits outweigh the possible harm.
There are no restrictions anymore regarding function instances. All of the following can be used now:

```
instance C1 (->)
instance C2 (->) a     -- sorry, not (a ->)
instance C3 (a -> b)   -- or, likewise, (->) a b
```
The type of any record field can now be polymorphic, i.e. it may introduce type variables that are not parameters of the data type to which the constructor containing the record field belongs. Here is a simple example:

```
data F b = F { fun :: forall a. b a -> b a }

foo :: F []
foo = F reverse

bar :: F m -> m a -> m b -> (m a, m b)
bar F{fun} as bs = (fun as, fun bs)

baz = bar foo [0,1] [true,false]          -- ([1,0], [false, true])
drp = bar (F (drop 1)) [0,1] [true,false] -- ([1], [false])
```

The example is a bit contrived insofar as `bar` could be made simpler by just taking a rank-2 function instead of a record that holds a rank-2 function.