Fossil

Check-in [346d62a4]
Overview
Comment: merge cleanX
SHA1: 346d62a4118d868bf4178e59946a64c6c23d369c
User & Date: jan.nijtmans 2015-11-03 05:47:00.000
Context
2015-11-03 23:50  merge cleanX ... (check-in: 607bc737 user: jan.nijtmans tags: cleanX-no-clean-glob)
2015-11-03 05:47  merge cleanX ... (check-in: 346d62a4 user: jan.nijtmans tags: cleanX-no-clean-glob)
2015-11-03 04:47  merge trunk ... (check-in: 23024b4a user: jan.nijtmans tags: cleanX)
2015-07-14 19:55  merge trunk ... (check-in: cac5cbae user: jan.nijtmans tags: cleanX-no-clean-glob)
Changes
Added .fossil-settings/clean-glob.

@@ -0,0 +1,17 @@
+*.a
+*.lib
+*.manifest
+*.o
+*.obj
+*.pdb
+*.res
+Makefile
+bld/*
+wbld/*
+win/*.c
+win/*.h
+win/*.exe
+win/headers
+win/linkopts
+autoconfig.h
+config.log
Changes to .fossil-settings/ignore-glob.

@@ -1,22 +1,5 @@
 compat/openssl*
 compat/tcl*
-*.a
-*.lib
-*.manifest
-*.o
-*.obj
-*.pdb
-*.res
-Makefile
-bld/*
-wbld/*
-win/*.c
-win/*.h
-win/*.exe
-win/headers
-win/linkopts
-autoconfig.h
-config.log
 fossil
 fossil.exe
 win/fossil.exe
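
These two hunks move the build-artifact patterns out of the versioned ignore-glob setting and into a new versioned clean-glob setting, so that "fossil clean" treats them as removable while ignore-glob keeps only the patterns that should stay invisible to Fossil. A brief usage sketch of inspecting the result in a check-out (the commands below are standard Fossil CLI usage, not part of this check-in):

    # Show the effective values of the two versioned settings
    fossil settings clean-glob
    fossil settings ignore-glob

    # Preview what "fossil clean" would delete, without removing anything
    fossil clean --dry-run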
Changes to Dockerfile.

@@ -1,26 +1,27 @@
 ###
 #   Dockerfile for Fossil
 ###
-FROM fedora:21
+FROM fedora:22
 
 ### Now install some additional parts we will need for the build
-RUN yum update -y && yum install -y gcc make zlib-devel openssl-devel tar && yum clean all && groupadd -r fossil -g 433 && useradd -u 431 -r -g fossil -d /opt/fossil -s /sbin/nologin -c "Fossil user" fossil
+RUN dnf update -y && dnf install -y gcc make zlib-devel openssl-devel tar && dnf clean all && groupadd -r fossil -g 433 && useradd -u 431 -r -g fossil -d /opt/fossil -s /sbin/nologin -c "Fossil user" fossil
 
-### If you want to build "release", change the next line accordingly.
-ENV FOSSIL_INSTALL_VERSION trunk
+### If you want to build "trunk", change the next line accordingly.
+ENV FOSSIL_INSTALL_VERSION release
 
 RUN curl "http://core.tcl.tk/tcl/tarball/tcl-src.tar.gz?name=tcl-src&uuid=release" | tar zx
 RUN cd tcl-src/unix && ./configure --prefix=/usr --disable-shared --disable-threads --disable-load && make && make install
 RUN curl "http://www.fossil-scm.org/index.html/tarball/fossil-src.tar.gz?name=fossil-src&uuid=${FOSSIL_INSTALL_VERSION}" | tar zx
 RUN cd fossil-src && ./configure --disable-fusefs --json --with-th1-docs --with-th1-hooks --with-tcl
+RUN cd fossil-src/src && mv main.c main.c.orig && sed s/\"now\"/0/ <main.c.orig >main.c
 RUN cd fossil-src && make && strip fossil && cp fossil /usr/bin && cd .. && rm -rf fossil-src && chmod a+rx /usr/bin/fossil && mkdir -p /opt/fossil && chown fossil:fossil /opt/fossil
 
 ### Build is done, remove modules no longer needed
-RUN yum remove -y gcc make zlib-devel openssl-devel tar && yum clean all
+RUN dnf remove -y gcc make zlib-devel openssl-devel tar && dnf clean all
 
 USER fossil
 
 ENV HOME /opt/fossil
 
 EXPOSE 8080
 
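
As a usage sketch only (not part of the check-in), the updated Dockerfile could be built and run roughly as follows; the image tag and the repository path inside the container are placeholders:

    # Build the image from the directory containing the Dockerfile
    docker build -t fossil-scm .

    # Run it, publishing the exposed port; /opt/fossil/repo.fossil is hypothetical
    docker run -p 8080:8080 fossil-scm fossil server --port 8080 /opt/fossil/repo.fossil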

Changes to Makefile.classic.

@@ -50,15 +50,15 @@
 TCC += -DFOSSIL_DYNAMIC_BUILD=1
 
 #### Extra arguments for linking the finished binary.  Fossil needs
 #    to link against the Z-Lib compression library unless the miniz
 #    library in the source tree is being used.  There are no other
 #    required dependencies.
 ZLIB_LIB.0 = -lz
-ZLIB_LIB.1 = 
+ZLIB_LIB.1 =
 ZLIB_LIB.  = $(ZLIB_LIB.0)
 
 # If using zlib:
 LIB += $(ZLIB_LIB.$(FOSSIL_ENABLE_MINIZ)) $(LDFLAGS)
 
 # If using HTTPS:
 LIB += -lcrypto -lssl
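
For context, the ZLIB_LIB.$(FOSSIL_ENABLE_MINIZ) indirection picks -lz only when the bundled miniz is not in use. A minimal sketch of the two invocations (the variable is assumed to be passed on the make command line; any extra CFLAGS miniz may need are outside this hunk):

    # Default build: FOSSIL_ENABLE_MINIZ is empty, so ZLIB_LIB. expands to -lz
    make -f Makefile.classic

    # Miniz build: ZLIB_LIB.1 is empty, so -lz is dropped from LIB
    make -f Makefile.classic FOSSIL_ENABLE_MINIZ=1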
Changes to VERSION.

@@ -1 +1 @@
-1.33
+1.34
Changes to ajax/i-test/rhino-test.js.

@@ -40,15 +40,15 @@
         if(!TestApp.verbose) return;
         print("ERROR: "+WhAjaj.stringify(opt));
     };
     cb.onResponse = function(resp,req){
         if(!TestApp.verbose) return;
         print("GOT RESPONSE: "+(('string'===typeof resp) ? resp : WhAjaj.stringify(resp)));
     };
-    
+
 })();
 
 /**
     Throws an exception of cond is a falsy value.
 */
 function assert(cond, descr){
     descr = descr || "Undescribed condition.";

@@ -125,15 +125,15 @@
 }
 testHAI.description = 'Get server version info.';
 
 function testIAmNobody(){
     TestApp.fossil.whoami('/json/whoami');
     assert('nobody' === TestApp.fossil.auth.name, 'User == nobody.' );
     assert(!TestApp.fossil.auth.authToken, 'authToken is not set.' );
-   
+
 }
 testIAmNobody.description = 'Ensure that current user is "nobody".';
 
 
 function testAnonymousLogin(){
     TestApp.fossil.login();
     assert('string' === typeof TestApp.fossil.auth.authToken, 'authToken = '+TestApp.fossil.auth.authToken);

@@ -219,15 +219,15 @@
     osb.write(json,0, json.length);
     osb.close();
     req = json = outs = osr = osb = undefined;
     var ins = p.getInputStream();
     var isr = new java.io.InputStreamReader(ins);
     var br = new java.io.BufferedReader(isr);
     var line;
-    
+
     while( null !== (line=br.readLine())){
         print(line);
     }
     br.close();
     isr.close();
     ins.close();
     p.waitFor();
Changes to ajax/index.html.

@@ -1,22 +1,22 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
 	"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
 
 <head>
 	<title>Fossil/JSON raw request sending</title>
 	<meta http-equiv="content-type" content="text/html;charset=utf-8" />
-    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script> 
+    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script>
     <script type="text/javascript" src="js/whajaj.js"></script>
     <script type="text/javascript" src="js/fossil-ajaj.js"></script>
 
 <style type='text/css'>
 th {
   text-align: left;
-  background-color: #ececec;  
+  background-color: #ececec;
 }
 
 .dangerWillRobinson {
     background-color: yellow;
 }
 </style>
 
Changes to ajax/wiki-editor.html.

@@ -1,22 +1,22 @@
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
 	"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
 
 <head>
 	<title>Fossil/JSON Wiki Editor Prototype</title>
 	<meta http-equiv="content-type" content="text/html;charset=utf-8" />
-    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script> 
+    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
     <script type="text/javascript" src="js/whajaj.js"></script>
     <script type="text/javascript" src="js/fossil-ajaj.js"></script>
 
 <style type='text/css'>
 th {
   text-align: left;
-  background-color: #ececec;  
+  background-color: #ececec;
 }
 
 .dangerWillRobinson {
     background-color: yellow;
 }
 
 .wikiPageLink {

@@ -213,15 +213,15 @@
     TheApp.refreshPageListView = function(){
         var list = (function(){
             var k, v, li = [];
             for( k in TheApp.pages ){
                 if(!TheApp.pages.hasOwnProperty(k)) continue;
                 li.push(k);
             }
-            return li;                
+            return li;
         })();
         var i, p, a, tgt = TheApp.jqe.pageListArea;
         tgt.text('');
         function makeLink(name){
             var link = jQuery('<span></span>');
             link.text(name);
             link.addClass('wikiPageLink');

@@ -320,15 +320,15 @@
 
 See also: <a href='index.html'>main test page</a>.
 
 <br>
 <b>Login:</b>
 <br/>
 <input type='button' value='Anon. Login' onclick='TheApp.cgi.login()' />
-or: 
+or:
 name:<input type='text' id='textUser' value='json-demo' size='12'/>
 pw:<input type='password' id='textPassword' value='json-demo' size='12'/>
 <input type='button' value='login' onclick='TheApp.cgi.login(jQuery("#textUser").val(),jQuery("#textPassword").val(),{onResponse:TheApp.onLogin})' />
 <input type='button' value='logout' onclick='TheApp.cgi.logout()' />
 
 <br/>
 <span id='currentAuthToken' style='font-family:monospaced'></span>
Changes to auto.def.

@@ -1,16 +1,18 @@
 # System autoconfiguration. Try: ./configure --help
 
 use cc cc-lib
 
 options {
     with-openssl:path|auto|none
                          => {Look for OpenSSL in the given path, or auto or none}
     with-miniz=0         => {Use miniz from the source tree}
     with-zlib:path       => {Look for zlib in the given path}
+    with-exec-rel-paths=0
+                         => {Enable relative paths for external diff/gdiff}
     with-legacy-mv-rm=0  => {Enable legacy behavior for mv/rm (skip checkout files)}
     with-th1-docs=0      => {Enable TH1 for embedded documentation pages}
     with-th1-hooks=0     => {Enable TH1 hooks for commands and web pages}
     with-tcl:path        => {Enable Tcl integration, with Tcl in the specified path}
     with-tcl-stubs=0     => {Enable Tcl integration via stubs library mechanism}
     with-tcl-private-stubs=0
                          => {Enable Tcl integration via private stubs mechanism}

@@ -89,14 +91,20 @@
 }
 
 if {[opt-bool with-legacy-mv-rm]} {
     define-append EXTRA_CFLAGS -DFOSSIL_ENABLE_LEGACY_MV_RM
     define FOSSIL_ENABLE_LEGACY_MV_RM
     msg-result "Legacy mv/rm support enabled"
 }
 
+if {[opt-bool with-exec-rel-paths]} {
+    define-append EXTRA_CFLAGS -DFOSSIL_ENABLE_EXEC_REL_PATHS
+    define FOSSIL_ENABLE_EXEC_REL_PATHS
+    msg-result "Relative paths in external diff/gdiff enabled"
+}
+
 if {[opt-bool with-th1-docs]} {
     define-append EXTRA_CFLAGS -DFOSSIL_ENABLE_TH1_DOCS
     define FOSSIL_ENABLE_TH1_DOCS
     msg-result "TH1 embedded documentation support enabled"
 }
 

@@ -154,24 +162,49 @@
         define FOSSIL_ENABLE_TCL_STUBS
         define USE_TCL_STUBS
     } else {
         set libs "$tclconfig(TCL_LIB_SPEC) $tclconfig(TCL_LIBS)"
     }
     set cflags $tclconfig(TCL_INCLUDE_SPEC)
     if {!$tclprivatestubs} {
+        set foundtcl 0; # Did we find a working Tcl library?
         cc-with [list -cflags $cflags -libs $libs] {
             if {$tclstubs} {
-                if {![cc-check-functions Tcl_InitStubs]} {
-                    user-error "Cannot find a usable Tcl stubs library $msg"
-                }
-            } else {
-                if {![cc-check-functions Tcl_CreateInterp]} {
-                    user-error "Cannot find a usable Tcl library $msg"
-                }
+                if {[cc-check-functions Tcl_InitStubs]} {
+                    set foundtcl 1
+                }
+            } else {
+                if {[cc-check-functions Tcl_CreateInterp]} {
+                    set foundtcl 1
+                }
             }
         }
+        if {!$foundtcl && [string match *-lieee* $libs]} {
+            # On some systems, using "-lieee" from TCL_LIB_SPEC appears
+            # to cause issues.
+            msg-result "Removing \"-lieee\" and retrying for Tcl..."
+            set libs [string map [list -lieee ""] $libs]
+            cc-with [list -cflags $cflags -libs $libs] {
+                if {$tclstubs} {
+                    if {[cc-check-functions Tcl_InitStubs]} {
+                        set foundtcl 1
+                    }
+                } else {
+                    if {[cc-check-functions Tcl_CreateInterp]} {
+                        set foundtcl 1
+                    }
+                }
+            }
+        }
+        if {!$foundtcl} {
+            if {$tclstubs} {
+                user-error "Cannot find a usable Tcl stubs library $msg"
+            } else {
+                user-error "Cannot find a usable Tcl library $msg"
+            }
+        }
     }
     set version $tclconfig(TCL_VERSION)$tclconfig(TCL_PATCH_LEVEL)
     msg-result "Found Tcl $version at $tclconfig(TCL_PREFIX)"
     if {!$tclprivatestubs} {
         define-append LIBS $libs
     }
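
As a usage sketch, the new boolean option is passed to the autosetup-based configure script like the other options in this block; combining it with other flags from the same options list is purely illustrative:

    ./configure --with-exec-rel-paths --with-th1-docs --with-th1-hooks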
Changes to fossil.1.

@@ -12,15 +12,15 @@
 \fICOMMAND [OPTIONS]\fR
 .SH DESCRIPTION
 Fossil is a distributed version control system (DVCS) with built-in
 wiki, ticket tracker, CGI/http interface, and http server.
 
 .SH Common COMMANDs:
 
-add            clean          import         pull           stash 
+add            clean          import         pull           stash
 .br
 addremove      clone          info           purge          status
 .br
 all            commit         init           push           sync
 .br
 annotate       diff           json           rebuild        tag
 .br
Changes to setup/fossil.nsi.

@@ -10,15 +10,15 @@
 Name "Fossil"
 
 ; The file to write
 OutFile "fossil-setup.exe"
 
 ; The default installation directory
 InstallDir $PROGRAMFILES\Fossil
-; Registry key to check for directory (so if you install again, it will 
+; Registry key to check for directory (so if you install again, it will
 ; overwrite the old one automatically)
 InstallDirRegKey HKLM SOFTWARE\Fossil "Install_Dir"
 
 ; The text to prompt the user to enter a directory
 ComponentText "This will install fossil on your computer."
 ; The text to prompt the user to enter a directory
 DirText "Choose a directory to install in to:"
Changes to skins/README.md.

@@ -27,15 +27,15 @@
 
    5.   Type "make" to rebuild.
 
 Development Hints
 -----------------
 
 One way to develop a new skin is to copy the baseline files (css.txt,
-details.txt, footer.txt, and header.txt) into a working directory $WORKDIR 
+details.txt, footer.txt, and header.txt) into a working directory $WORKDIR
 then launch Fossil with a command-line option "--skin $WORKDIR".  Example:
 
         cp -r skins/default newskin
         fossil ui --skin ./newskin
 
 When the argument to --skin contains one or more '/' characters, the
 appropriate skin files are read from disk from the directory specified.
Changes to skins/blitz/css.txt.

@@ -1060,15 +1060,15 @@
   border-right: 1px solid #ddd;
 }
 
 tr.timelineSelected {
   border-left: 2px solid orange;
   background-color: #ffffe8;
   border-bottom: 1px solid #ddd;
-  border-right: 1px solid #ddd;  
+  border-right: 1px solid #ddd;
 }
 
 tr.timelineCurrent td.timelineTableCell {
 }
 
 tr.timelineSpacer {
 }
Changes to skins/blitz/ticket.txt.

@@ -83,15 +83,15 @@
               username AS xusername
          FROM ticketchng
         WHERE tkt_id=$tkt_id AND length(icomment)>0} {
           if {$seenRow eq "0"} {
             html "<h5>User Comments</h5>\n"
             set seenRow 1
           }
-  html "<div class='tktComment'>\n"          
+  html "<div class='tktComment'>\n"
   html "<div class='tktCommentHeader'>\n"
   html "<div class='pull-right'>$xdate</div>\n"
   html "<span class='tktCommentLogin'>[htmlize $xlogin]</span>"
   if {$xlogin ne $xusername && [string length $xusername]>0} {
     html " (claiming to be <span class='tktCommentLogin'>[htmlize $xusername]</span>)"
   }
   html " commented</div>\n"
Changes to skins/blitz_no_logo/ticket.txt.

@@ -83,15 +83,15 @@
               username AS xusername
          FROM ticketchng
         WHERE tkt_id=$tkt_id AND length(icomment)>0} {
           if {$seenRow eq "0"} {
             html "<h5>User Comments</h5>\n"
             set seenRow 1
           }
-  html "<div class='tktComment'>\n"          
+  html "<div class='tktComment'>\n"
   html "<div class='tktCommentHeader'>\n"
   html "<div class='pull-right'>$xdate</div>\n"
   html "<span class='tktCommentLogin'>[htmlize $xlogin]</span>"
   if {$xlogin ne $xusername && [string length $xusername]>0} {
     html " (claiming to be <span class='tktCommentLogin'>[htmlize $xusername]</span>)"
   }
   html " commented</div>\n"
Changes to skins/xekri/css.txt.

@@ -1,14 +1,14 @@
 /******************************************************************************
  * Xekri
  *
- * To adjust the width of the contents for this skin, look for the "max-width" 
- * property and change its value.  (It's in the "Main Area" section)  The value 
- * determines how much of the browser window to use.  Some like 100%, so that 
- * the entire window is used.  Others prefer 80%, which makes the contents 
+ * To adjust the width of the contents for this skin, look for the "max-width"
+ * property and change its value.  (It's in the "Main Area" section)  The value
+ * determines how much of the browser window to use.  Some like 100%, so that
+ * the entire window is used.  Others prefer 80%, which makes the contents
  * easier to read for them.
  */
 
 
 /**************************************
  * General HTML
  */

@@ -929,24 +929,24 @@
 span.wikiruleHead {
   font-weight: bold;
 }
 
 
 /* format for user color input on checkin edit page */
 input.checkinUserColor {
-  /* no special definitions, class defined, to enable color pickers, 
+  /* no special definitions, class defined, to enable color pickers,
   * f.e.:
-  * ** add the color picker found at http:jscolor.com as java script 
+  * ** add the color picker found at http:jscolor.com as java script
   * include
   * ** to the header and configure the java script file with
   * ** 1. use as bindClass :checkinUserColor
   * ** 2. change the default hash adding behaviour to ON
-  * ** or change the class defition of element identified by 
+  * ** or change the class defition of element identified by
   * id="clrcust"
-  * ** to a standard jscolor definition with java script in the footer. 
+  * ** to a standard jscolor definition with java script in the footer.
   * */
 }
 
 /* format for end of content area, to be used to clear page flow. */
 div.endContent {
   clear: both;
 }
Changes to src/add.c.

@@ -692,16 +692,20 @@
   const char *zNew,
   int dryRunFlag
 ){
   int x = db_int(-1, "SELECT deleted FROM vfile WHERE pathname=%Q %s",
                          zNew, filename_collation());
   if( x>=0 ){
     if( x==0 ){
-      fossil_fatal("cannot rename '%s' to '%s' since another file named '%s'"
-                   " is currently under management", zOrig, zNew, zNew);
+      if( !filenames_are_case_sensitive() && fossil_stricmp(zOrig,zNew)==0 ){
+        /* Case change only */
+      }else{
+        fossil_fatal("cannot rename '%s' to '%s' since another file named '%s'"
+                     " is currently under management", zOrig, zNew, zNew);
+      }
     }else{
       fossil_fatal("cannot rename '%s' to '%s' since the delete of '%s' has "
                    "not yet been committed", zOrig, zNew, zNew);
     }
   }
   fossil_print("RENAME %s %s\n", zOrig, zNew);
   if( !dryRunFlag ){

@@ -722,23 +726,27 @@
 static void add_file_to_move(
   const char *zOldName, /* The old name of the file on disk. */
   const char *zNewName  /* The new name of the file on disk. */
 ){
   static int tableCreated = 0;
   Blob fullOldName;
   Blob fullNewName;
+  char *zOld, *zNew;
   if( !tableCreated ){
     db_multi_exec("CREATE TEMP TABLE fmove(x TEXT PRIMARY KEY %s, y TEXT %s)",
                   filename_collation(), filename_collation());
     tableCreated = 1;
   }
   file_tree_name(zOldName, &fullOldName, 1, 1);
+  zOld = blob_str(&fullOldName);
   file_tree_name(zNewName, &fullNewName, 1, 1);
-  db_multi_exec("INSERT INTO fmove VALUES('%q','%q');",
-                blob_str(&fullOldName), blob_str(&fullNewName));
+  zNew = blob_str(&fullNewName);
+  if( filenames_are_case_sensitive() || fossil_stricmp(zOld,zNew)!=0 ){
+    db_multi_exec("INSERT INTO fmove VALUES('%q','%q');", zOld, zNew);
+  }
   blob_reset(&fullNewName);
   blob_reset(&fullOldName);
 }
 
 /*
 ** This function moves files within the checkout, using the file names
 ** contained in the temporary table "fmove".  The temporary table is
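
These two hunks allow case-only renames on case-insensitive filesystems: the rename is no longer rejected as a collision with "another file ... currently under management", and the temporary fmove table skips entries whose old and new names differ only by case. A usage sketch (the file names are illustrative):

    # Previously rejected on case-insensitive filesystems; now treated as a case change
    fossil mv readme.txt README.txt
    fossil commit -m "Fix file name capitalization"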
Changes to src/bisect.c.

@@ -175,52 +175,56 @@
 ** sorted either chronologically by bisect time, or by check-in time.
 */
 static void bisect_chart(int sortByCkinTime){
   char *zLog = db_lget("bisect-log","");
   Blob log, id;
   Stmt q;
   int cnt = 0;
+  int iCurrent = db_lget_int("checkout",0);
   blob_init(&log, zLog, -1);
   db_multi_exec(
      "CREATE TEMP TABLE bilog("
      "  seq INTEGER PRIMARY KEY,"  /* Sequence of events */
      "  stat TEXT,"                /* Type of occurrence */
-     "  rid INTEGER"               /* Check-in number */
+     "  rid INTEGER UNIQUE"        /* Check-in number */
      ");"
   );
   db_prepare(&q, "INSERT OR IGNORE INTO bilog(seq,stat,rid)"
                  " VALUES(:seq,:stat,:rid)");
   while( blob_token(&log, &id) ){
     int rid = atoi(blob_str(&id));
     db_bind_int(&q, ":seq", ++cnt);
     db_bind_text(&q, ":stat", rid>0 ? "GOOD" : "BAD");
     db_bind_int(&q, ":rid", rid>=0 ? rid : -rid);
     db_step(&q);
     db_reset(&q);
   }
   db_bind_int(&q, ":seq", ++cnt);
   db_bind_text(&q, ":stat", "CURRENT");
-  db_bind_int(&q, ":rid", db_lget_int("checkout", 0));
+  db_bind_int(&q, ":rid", iCurrent);
   db_step(&q);
   db_finalize(&q);
   db_prepare(&q,
     "SELECT bilog.seq, bilog.stat,"
-    "       substr(blob.uuid,1,16), datetime(event.mtime)"
+    "       substr(blob.uuid,1,16), datetime(event.mtime),"
+    "       blob.rid==%d"
     "  FROM bilog, blob, event"
     " WHERE blob.rid=bilog.rid AND event.objid=bilog.rid"
     "   AND event.type='ci'"
     " ORDER BY %s bilog.rowid ASC",
-    (sortByCkinTime ? "event.mtime DESC, " : "")
+    iCurrent, (sortByCkinTime ? "event.mtime DESC, " : "")
   );
   while( db_step(&q)==SQLITE_ROW ){
-    fossil_print("%3d %-7s %s %s\n",
+    const char *zGoodBad = db_column_text(&q, 1);
+    fossil_print("%3d %-7s %s %s%s\n",
         db_column_int(&q, 0),
-        db_column_text(&q, 1),
+        zGoodBad,
         db_column_text(&q, 3),
-        db_column_text(&q, 2));
+        db_column_text(&q, 2),
+        (db_column_int(&q, 4) && zGoodBad[0]!='C') ? " CURRENT" : "");
   }
   db_finalize(&q);
 }
 
 /*
 ** COMMAND: bisect
 **
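
The bisect change records the current check-out and appends " CURRENT" to its row in the chart output. A usage sketch of a session that would exercise this path; the version name is a placeholder and the subcommand names are taken from the bisect command's general documentation rather than this hunk:

    fossil bisect bad               # mark the current check-out as bad
    fossil bisect good version-1.32
    # ...test the check-in fossil proposes, then mark it and repeat...
    fossil bisect good
    fossil bisect chart             # the checked-out row is flagged CURRENT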
Changes to src/blob.c.

@@ -269,14 +269,15 @@
   pBlob->xRealloc = blobReallocStatic;
 }
 
 /*
 ** Append text or data to the end of a blob.
 */
 void blob_append(Blob *pBlob, const char *aData, int nData){
+  assert( aData!=0 || nData==0 );
   blob_is_init(pBlob);
   if( nData<0 ) nData = strlen(aData);
   if( nData==0 ) return;
   if( pBlob->nUsed + nData >= pBlob->nAlloc ){
     pBlob->xRealloc(pBlob, pBlob->nUsed + nData + pBlob->nAlloc + 100);
     if( pBlob->nUsed + nData >= pBlob->nAlloc ){
       blob_panic();
Changes to src/cgi.c.

@@ -1048,15 +1048,15 @@
     }
   }
 
   /* If no match is found and the name begins with an upper-case
   ** letter, then check to see if there is an environment variable
   ** with the given name.
   */
-  if( fossil_isupper(zName[0]) ){
+  if( zName && fossil_isupper(zName[0]) ){
     const char *zValue = fossil_getenv(zName);
     if( zValue ){
       cgi_set_parameter_nocopy(zName, zValue, 0);
       CGIDEBUG(("env-match [%s] = [%s]\n", zName, zValue));
       return zValue;
     }
   }
Changes to src/checkin.c.

@@ -60,29 +60,30 @@
       (blob_size(&where)>0) ? "OR" : "AND", zName,
       filename_collation(), zName, filename_collation(),
       zName, filename_collation()
     );
   }
 
   db_prepare(&q,
-    "SELECT pathname, deleted, chnged, rid, coalesce(origname!=pathname,0)"
+    "SELECT pathname, deleted, chnged, rid, coalesce(origname!=pathname,0), islink"
     "  FROM vfile "
     " WHERE is_selected(id) %s"
     "   AND (chnged OR deleted OR rid=0 OR pathname!=origname)"
     " ORDER BY 1 /*scan*/",
     blob_sql_text(&where)
   );
   blob_zero(&rewrittenPathname);
   while( db_step(&q)==SQLITE_ROW ){
     const char *zPathname = db_column_text(&q,0);
     const char *zDisplayName = zPathname;
     int isDeleted = db_column_int(&q, 1);
     int isChnged = db_column_int(&q,2);
     int isNew = db_column_int(&q,3)==0;
     int isRenamed = db_column_int(&q,4);
+    int isLink = db_column_int(&q,5);
     char *zFullName = mprintf("%s%s", g.zLocalRoot, zPathname);
     if( cwdRelative ){
       file_relative_name(zFullName, &rewrittenPathname, 0);
       zDisplayName = blob_str(&rewrittenPathname);
       if( zDisplayName[0]=='.' && zDisplayName[1]=='/' ){
         zDisplayName += 2;  /* no unnecessary ./ prefix */
       }

@@ -119,15 +120,15 @@
         blob_appendf(report, "EXECUTABLE %s\n", zDisplayName);
       }else if( isChnged==7 ){
         blob_appendf(report, "SYMLINK    %s\n", zDisplayName);
       }else if( isChnged==8 ){
         blob_appendf(report, "UNEXEC     %s\n", zDisplayName);
       }else if( isChnged==9 ){
         blob_appendf(report, "UNLINK     %s\n", zDisplayName);
-      }else if( file_contains_merge_marker(zFullName) ){
+      }else if( !isLink && file_contains_merge_marker(zFullName) ){
         blob_appendf(report, "CONFLICT   %s\n", zDisplayName);
       }else{
         blob_appendf(report, "EDITED     %s\n", zDisplayName);
       }
     }else if( isRenamed ){
       blob_appendf(report, "RENAMED    %s\n", zDisplayName);
     }else{

@@ -432,26 +433,27 @@
        "       datetime(checkin_mtime(%d,rid),'unixepoch'%s)"
        "  FROM vfile %s"
        " ORDER BY %s",
        vid, timeline_utc(), blob_sql_text(&where), zOrderBy /*safe-for-%s*/
     );
   }else{
     db_prepare(&q,
-       "SELECT pathname, deleted, rid, chnged, coalesce(origname!=pathname,0)"
+       "SELECT pathname, deleted, rid, chnged, coalesce(origname!=pathname,0), islink"
        "  FROM vfile %s"
        " ORDER BY %s", blob_sql_text(&where), zOrderBy /*safe-for-%s*/
     );
   }
   blob_reset(&where);
   while( db_step(&q)==SQLITE_ROW ){
     const char *zPathname = db_column_text(&q,0);
     int isDeleted = db_column_int(&q, 1);
     int isNew = db_column_int(&q,2)==0;
     int chnged = db_column_int(&q,3);
     int renamed = db_column_int(&q,4);
+    int isLink = db_column_int(&q,5);
     char *zFullName = mprintf("%s%s", g.zLocalRoot, zPathname);
     const char *type = "";
     if( verboseFlag ){
       if( isNew ){
         type = "ADDED      ";
       }else if( isDeleted ){
         type = "DELETED    ";

@@ -466,15 +468,15 @@
           type = "UPDATED_BY_MERGE ";
         }else if( chnged==3 ){
           type = "ADDED_BY_MERGE ";
         }else if( chnged==4 ){
           type = "UPDATED_BY_INTEGRATE ";
         }else if( chnged==5 ){
           type = "ADDED_BY_INTEGRATE ";
-        }else if( file_contains_merge_marker(zFullName) ){
+        }else if( !isLink && file_contains_merge_marker(zFullName) ){
           type = "CONFLICT   ";
         }else{
           type = "EDITED     ";
         }
       }else if( renamed ){
         type = "RENAMED    ";
       }else{

@@ -638,18 +640,18 @@
 ** the --force flag is used or unless the file matches glob pattern
 ** specified by the --ignore or --keep will ever be deleted. The
 ** default values for --ignore, and --keep are determined by the
 ** (versionable) clean-glob, ignore-glob, and keep-glob settings.
 ** Files and subdirectories whose names begin with "." are automatically
 ** ignored unless the --dotfiles option is used.
 **
-** The --verily option ignores the ignore-glob setting and turns on
-**--dotfiles, and --emptydirs.  Use the --verily option when you
-** really want to clean up everything.  Extreme care should be
-** exercised when using the --verily option.
+** The --verily option ignores the keep-glob and ignore-glob settings
+** and turns on --dotfiles, and --emptydirs.  Use the --verily
+** option when you really want to clean up everything.  Extreme care
+** should be exercised when using the --verily option.
 **
 ** Options:
 **    --allckouts      Check for empty directories within any checkouts
 **                     that may be nested within the current one.  This
 **                     option should be used with great care because the
 **                     empty-dirs setting (and other applicable settings)
 **                     belonging to the other repositories, if any, will

@@ -666,17 +668,19 @@
 **                     explicitly exempted via the empty-dirs setting
 **                     or another applicable setting or command line
 **                     argument.  Matching files, if any, are removed
 **                     prior to checking for any empty directories;
 **                     therefore, directories that contain only files
 **                     that were removed will be removed as well.
 **    -f|--force       Remove files without prompting.
-**    -x|--verily      Remove everything that is not a managed file or
-**                     the repository itself.  Implies --emptydirs and
-**                     --dotfiles.  Disregards ignore-glob setting.
+**    -x|--verily      WARNING: Removes everything that is not a managed
+**                     file or the repository itself.  This option
+**                     implies the --emptydirs and --dotfiles options.
+**                     --disable-undo options.  Furthermore, it completely
+**                     disregards ignore-glob settings.
 **                     Compatibile with "git clean -x".
 **    --ignore <CSG>   Ignore files matching patterns from the
 **                     comma separated list of glob patterns.
 **    --keep <CSG>     Keep files matching this comma separated
 **                     list of glob patterns.
 **    -n|--dry-run     Delete nothing, but display what would have been
 **                     deleted.

@@ -720,21 +724,22 @@
   zIgnoreFlag = find_option("ignore",0,1);
   verboseFlag = find_option("verbose","v",0)!=0;
   zKeepFlag = find_option("keep",0,1);
   db_must_be_within_tree();
   if( find_option("verily","x",0)!=0 ){
     verilyFlag = 1;
     emptyDirsFlag = 1;
+    disableUndo = 1;
     scanFlags |= SCAN_ALL;
     zCleanFlag = 0;
   }
   if( zIgnoreFlag==0 ){
     zIgnoreFlag = db_get("ignore-glob", 0);
   }
-  if( zKeepFlag==0 ){
+  if( zKeepFlag==0 && !verilyFlag ){
     zKeepFlag = db_get("keep-glob", 0);
   }
   if( db_get_boolean("dotfiles", 0) ) scanFlags |= SCAN_ALL;
   verify_all_options();
   pIgnore = glob_create(zIgnoreFlag);
   pKeep = glob_create(zKeepFlag);
   nRoot = (int)strlen(g.zLocalRoot);

@@ -754,15 +759,15 @@
     if( file_tree_name(g.zRepositoryName, &repo, 0, 0) ){
       db_multi_exec("DELETE FROM sfile WHERE x=%B", &repo);
     }
     db_multi_exec("DELETE FROM sfile WHERE x IN (SELECT pathname FROM vfile)");
     while( db_step(&q)==SQLITE_ROW ){
       const char *zName = db_column_text(&q, 0);
       if( glob_match(pKeep, zName+nRoot) ){
-        if( verboseFlag || verilyFlag ){
+        if( verboseFlag ){
           fossil_print("KEPT file \"%s\" not removed (due to --keep"
                        " or \"keep-glob\")\n", zName+nRoot);
         }
         continue;
       }
       if( !dryRunFlag
           && !(verilyFlag && glob_match(pIgnore, zName+nRoot)) ){

@@ -818,15 +823,15 @@
         " WHERE x NOT IN (%s) AND y = 0"
         " ORDER BY 1 DESC",
         g.zLocalRoot, fossil_all_reserved_names(0)
     );
     while( db_step(&q)==SQLITE_ROW ){
       const char *zName = db_column_text(&q, 0);
       if( glob_match(pKeep, zName+nRoot) ){
-        if( verboseFlag || verilyFlag ){
+        if( verboseFlag ){
           fossil_print("KEPT directory \"%s\" not removed (due to --keep"
                        " or \"keep-glob\")\n", zName+nRoot);
         }
         continue;
       }
       if( dryRunFlag || file_rmdir(zName)==0 ){
         if( verboseFlag || dryRunFlag ){

@@ -1616,14 +1621,16 @@
 **    -n|--dry-run               If given, display instead of run actions
 **    --no-warnings              omit all warnings about file contents
 **    --nosign                   do not attempt to sign this commit with gpg
 **    --private                  do not sync changes and their descendants
 **    --sha1sum                  verify file status using SHA1 hashing rather
 **                               than relying on file mtimes
 **    --tag TAG-NAME             assign given tag TAG-NAME to the check-in
+**    --date-override DATE       DATE to use instead of 'now'
+**    --user-override USER       USER to use instead of the current default
 **
 ** See also: branch, changes, checkout, extras, sync
 */
 void commit_cmd(void){
   int hasChanges;        /* True if unsaved changes exist */
   int vid;               /* blob-id of parent version */
   int nrid;              /* blob-id of a modified file */
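
Net effect of the clean and commit hunks: --verily now also disregards keep-glob, disables undo, and only reports KEPT files when --verbose is given, and "fossil commit" documents the --date-override and --user-override options. A cautious usage sketch (the dry-run pass first is just good practice, not required by the code; the commit values are illustrative):

    # Preview everything --verily would delete, then actually remove it
    fossil clean --verily --dry-run
    fossil clean --verily

    # The newly documented commit overrides
    fossil commit --date-override '2015-11-03 05:47:00' --user-override jan.nijtmans -m "example"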
Changes to src/db.c.
Before (lines 1972-2035):
  ){
    /* There's a versioned setting, and a non-versioned setting. Tell
    ** the user about the conflict */
    fossil_warning(
        "setting %s has both versioned and non-versioned values: using "
        "versioned value from file .fossil-settings/%s (to silence this "
        "warning, either create an empty file named "
        ".fossil-settings/%s.no-warn or delete the non-versioned setting "

        " with \"fossil unset %s\")", zName, zName, zName, zName
    );
  }
  /* Prefer the versioned setting */
  return ( zVersionedSetting!=0 ) ? zVersionedSetting : zNonVersionedSetting;
}


/*
** Get and set values from the CONFIG, GLOBAL_CONFIG and VVAR table in the
** repository and local databases.
**
** If no such variable exists, return zDefault.  Or, if zName is the name
** of a setting, then the zDefault is ignored and the default value of the
** setting is returned instead.  If zName is a versioned setting, then
** versioned value takes priority.
*/
char *db_get(const char *zName, char *zDefault){
  char *z = 0;
  const Setting *pSetting = db_find_setting(zName, 0);
  if( g.repositoryOpen ){
    z = db_text(0, "SELECT value FROM config WHERE name=%Q", zName);
  }
  if( z==0 && g.zConfigDbName ){
    db_swap_connections();
    z = db_text(0, "SELECT value FROM global_config WHERE name=%Q", zName);
    db_swap_connections();
  }
  if( pSetting!=0 && pSetting->versionable ){
    /* This is a versionable setting, try and get the info from a
    ** checked out file */
    z = db_get_versioned(zName, z);
  }
  if( z==0 ){
    if( zDefault==0 && pSetting && pSetting->def[0] ){
      z = fossil_strdup(pSetting->def);
    }else{
      z = zDefault;
    }
  }
  return z;
}
char *db_get_mtime(const char *zName, char *zFormat, char *zDefault){
  char *z = 0;
  if( g.repositoryOpen ){
    z = db_text(0, "SELECT mtime FROM config WHERE name=%Q", zName);
  }
  if( z==0 ){
    z = zDefault;
  }else if( zFormat!=0 ){
    z = db_text(0, "SELECT strftime(%Q,%Q,'unixepoch');", zFormat, z);
  }
  return z;
}
void db_set(const char *zName, const char *zValue, int globalFlag){
  db_begin_transaction();

After (lines 1972-2036):
  ){
    /* There's a versioned setting, and a non-versioned setting. Tell
    ** the user about the conflict */
    fossil_warning(
        "setting %s has both versioned and non-versioned values: using "
        "versioned value from file .fossil-settings/%s (to silence this "
        "warning, either create an empty file named "
        ".fossil-settings/%s.no-warn in the check-out root, "
        "or delete the non-versioned setting "
        "with \"fossil unset %s\")", zName, zName, zName, zName
    );
  }
  /* Prefer the versioned setting */
  return ( zVersionedSetting!=0 ) ? zVersionedSetting : zNonVersionedSetting;
}


/*
** Get and set values from the CONFIG, GLOBAL_CONFIG and VVAR table in the
** repository and local databases.
**
** If no such variable exists, return zDefault.  Or, if zName is the name
** of a setting, then the zDefault is ignored and the default value of the
** setting is returned instead.  If zName is a versioned setting, then
** versioned value takes priority.
*/
char *db_get(const char *zName, const char *zDefault){
  char *z = 0;
  const Setting *pSetting = db_find_setting(zName, 0);
  if( g.repositoryOpen ){
    z = db_text(0, "SELECT value FROM config WHERE name=%Q", zName);
  }
  if( z==0 && g.zConfigDbName ){
    db_swap_connections();
    z = db_text(0, "SELECT value FROM global_config WHERE name=%Q", zName);
    db_swap_connections();
  }
  if( pSetting!=0 && pSetting->versionable ){
    /* This is a versionable setting, try and get the info from a
    ** checked out file */
    z = db_get_versioned(zName, z);
  }
  if( z==0 ){
    if( zDefault==0 && pSetting && pSetting->def[0] ){
      z = fossil_strdup(pSetting->def);
    }else{
      z = fossil_strdup(zDefault);
    }
  }
  return z;
}
char *db_get_mtime(const char *zName, const char *zFormat, const char *zDefault){
  char *z = 0;
  if( g.repositoryOpen ){
    z = db_text(0, "SELECT mtime FROM config WHERE name=%Q", zName);
  }
  if( z==0 ){
    z = fossil_strdup(zDefault);
  }else if( zFormat!=0 ){
    z = db_text(0, "SELECT strftime(%Q,%Q,'unixepoch');", zFormat, z);
  }
  return z;
}
void db_set(const char *zName, const char *zValue, int globalFlag){
  db_begin_transaction();
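A practical effect of the const-correct signatures above is that db_get() now always hands back a copy the caller may pass to fossil_free(), whether the value came from the repository, the global configuration, a versioned settings file, or the supplied default. A minimal usage sketch (illustrative; the setting name is arbitrary):

  char *zEditor = db_get("editor", 0);
  if( zEditor && zEditor[0] ){
    /* ... launch the configured editor ... */
  }
  fossil_free(zEditor);  /* now safe in every case, defaults included */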
Before (lines 2107-2122):
}
int db_get_boolean(const char *zName, int dflt){
  char *zVal = db_get(zName, dflt ? "on" : "off");
  if( is_truth(zVal) ) return 1;
  if( is_false(zVal) ) return 0;
  return dflt;
}
char *db_lget(const char *zName, char *zDefault){
  return db_text((char*)zDefault,
                 "SELECT value FROM vvar WHERE name=%Q", zName);
}
void db_lset(const char *zName, const char *zValue){
  db_multi_exec("REPLACE INTO vvar(name,value) VALUES(%Q,%Q)", zName, zValue);
}
int db_lget_int(const char *zName, int dflt){
  return db_int(dflt, "SELECT value FROM vvar WHERE name=%Q", zName);

After (lines 2108-2123):
}
int db_get_boolean(const char *zName, int dflt){
  char *zVal = db_get(zName, dflt ? "on" : "off");
  if( is_truth(zVal) ) return 1;
  if( is_false(zVal) ) return 0;
  return dflt;
}
char *db_lget(const char *zName, const char *zDefault){
  return db_text(zDefault,
                 "SELECT value FROM vvar WHERE name=%Q", zName);
}
void db_lset(const char *zName, const char *zValue){
  db_multi_exec("REPLACE INTO vvar(name,value) VALUES(%Q,%Q)", zName, zValue);
}
int db_lget_int(const char *zName, int dflt){
  return db_int(dflt, "SELECT value FROM vvar WHERE name=%Q", zName);
Before (lines 2376-2389):
  { "diff-binary",      0,              0, 0, 0, "on"                  },
  { "diff-command",     0,             40, 0, 0, ""                    },
  { "dont-push",        0,              0, 0, 0, "off"                 },
  { "dotfiles",         0,              0, 1, 0, "off"                 },
  { "editor",           0,             32, 0, 0, ""                    },
  { "empty-dirs",       0,             40, 1, 0, ""                    },
  { "encoding-glob",    0,             40, 1, 0, ""                    },





  { "gdiff-command",    0,             40, 0, 0, "gdiff"               },
  { "gmerge-command",   0,             40, 0, 0, ""                    },
  { "hash-digits",      0,              5, 0, 0, "10"                  },
  { "http-port",        0,             16, 0, 0, "8080"                },
  { "https-login",      0,              0, 0, 0, "off"                 },
  { "ignore-glob",      0,             40, 1, 0, ""                    },
  { "keep-glob",        0,             40, 1, 0, ""                    },

After (lines 2377-2395):
  { "diff-binary",      0,              0, 0, 0, "on"                  },
  { "diff-command",     0,             40, 0, 0, ""                    },
  { "dont-push",        0,              0, 0, 0, "off"                 },
  { "dotfiles",         0,              0, 1, 0, "off"                 },
  { "editor",           0,             32, 0, 0, ""                    },
  { "empty-dirs",       0,             40, 1, 0, ""                    },
  { "encoding-glob",    0,             40, 1, 0, ""                    },
#if defined(FOSSIL_ENABLE_EXEC_REL_PATHS)
  { "exec-rel-paths",   0,              0, 0, 0, "on"                  },
#else
  { "exec-rel-paths",   0,              0, 0, 0, "off"                 },
#endif
  { "gdiff-command",    0,             40, 0, 0, "gdiff"               },
  { "gmerge-command",   0,             40, 0, 0, ""                    },
  { "hash-digits",      0,              5, 0, 0, "10"                  },
  { "http-port",        0,             16, 0, 0, "8080"                },
  { "https-login",      0,              0, 0, 0, "off"                 },
  { "ignore-glob",      0,             40, 1, 0, ""                    },
  { "keep-glob",        0,             40, 1, 0, ""                    },
Before (lines 2459-2473):
** %fossil unset PROPERTY ?OPTIONS?
**
** The "settings" command with no arguments lists all properties and their
** values.  With just a property name it shows the value of that property.
** With a value argument it changes the property for the current repository.
**
** Settings marked as versionable are overridden by the contents of the
** file named .fossil-settings/PROPERTY in the checked out files, if that
** file exists.
**
** The "unset" command clears a property setting.
**
**
**    access-log       If enabled, record successful and failed login attempts
**                     in the "accesslog" table.  Default: off

After (lines 2465-2479):
** %fossil unset PROPERTY ?OPTIONS?
**
** The "settings" command with no arguments lists all properties and their
** values.  With just a property name it shows the value of that property.
** With a value argument it changes the property for the current repository.
**
** Settings marked as versionable are overridden by the contents of the
** file named .fossil-settings/PROPERTY in the check-out root, if that
** file exists.
**
** The "unset" command clears a property setting.
**
**
**    access-log       If enabled, record successful and failed login attempts
**                     in the "accesslog" table.  Default: off
Before (lines 2546-2559):
**                     created.
**
**    encoding-glob    The VALUE is a comma or newline-separated list of GLOB
**     (versionable)   patterns specifying files that the "commit" command will
**                     ignore when issuing warnings about text files that may
**                     use another encoding than ASCII or UTF-8. Set to "*"
**                     to disable encoding checking.



**
**    gdiff-command    External command to run when performing a graphical
**                     diff. If undefined, text diff will be used.
**
**    gmerge-command   A graphical merge conflict resolver command operating
**                     on four files.
**                     Ex: kdiff3 "%baseline" "%original" "%merge" -o "%output"

After (lines 2552-2568):
**                     created.
**
**    encoding-glob    The VALUE is a comma or newline-separated list of GLOB
**     (versionable)   patterns specifying files that the "commit" command will
**                     ignore when issuing warnings about text files that may
**                     use another encoding than ASCII or UTF-8. Set to "*"
**                     to disable encoding checking.
**
**    exec-rel-paths   When executing certain external commands (e.g. diff and
**                     gdiff), use relative paths.
**
**    gdiff-command    External command to run when performing a graphical
**                     diff. If undefined, text diff will be used.
**
**    gmerge-command   A graphical merge conflict resolver command operating
**                     on four files.
**                     Ex: kdiff3 "%baseline" "%original" "%merge" -o "%output"
Changes to src/delta.c.
Before (lines 587-600):
#endif
        if( total!=limit ){
          /* ERROR: generated size does not match predicted size */
          return -1;
        }
        return total;
      }
      default: {
        /* ERROR: unknown delta operator */
        return -1;
      }
    }
  }
  /* ERROR: unterminated delta */

After (lines 587-661):
#endif
        if( total!=limit ){
          /* ERROR: generated size does not match predicted size */
          return -1;
        }
        return total;
      }
      default: {
        /* ERROR: unknown delta operator */
        return -1;
      }
    }
  }
  /* ERROR: unterminated delta */
  return -1;
}

/*
** Analyze a delta.  Figure out the total number of bytes copied from
** source to target, and the total number of bytes inserted by the delta,
** and return both numbers.
*/
int delta_analyze(
  const char *zDelta,    /* Delta to apply to the pattern */
  int lenDelta,          /* Length of the delta */
  int *pnCopy,           /* OUT: Number of bytes copied */
  int *pnInsert          /* OUT: Number of bytes inserted */
){
  unsigned int nInsert = 0;
  unsigned int nCopy = 0;

  (void)getInt(&zDelta, &lenDelta);
  if( *zDelta!='\n' ){
    /* ERROR: size integer not terminated by "\n" */
    return -1;
  }
  zDelta++; lenDelta--;
  while( *zDelta && lenDelta>0 ){
    unsigned int cnt;
    cnt = getInt(&zDelta, &lenDelta);
    switch( zDelta[0] ){
      case '@': {
        zDelta++; lenDelta--;
        (void)getInt(&zDelta, &lenDelta);
        if( lenDelta>0 && zDelta[0]!=',' ){
          /* ERROR: copy command not terminated by ',' */
          return -1;
        }
        zDelta++; lenDelta--;
        nCopy += cnt;
        break;
      }
      case ':': {
        zDelta++; lenDelta--;
        nInsert += cnt;
        if( cnt>lenDelta ){
          /* ERROR: insert count exceeds size of delta */
          return -1;
        }
        zDelta += cnt;
        lenDelta -= cnt;
        break;
      }
      case ';': {
        *pnCopy = nCopy;
        *pnInsert = nInsert;
        return 0;
      }
      default: {
        /* ERROR: unknown delta operator */
        return -1;
      }
    }
  }
  /* ERROR: unterminated delta */
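A minimal caller for the new delta_analyze() helper might look like this (illustrative only; it assumes a delta blob was already produced, for example with blob_delta_create()):

  int nCopy = 0, nInsert = 0;
  if( delta_analyze(blob_buffer(&delta), blob_size(&delta), &nCopy, &nInsert)==0 ){
    fossil_print("%d bytes copied, %d bytes inserted\n", nCopy, nInsert);
  }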
Changes to src/deltacmd.c.
Before (lines 41-76):
  blob_resize(pDelta, len);
  return 0;
}

/*
** COMMAND:  test-delta-create
**


** Given two input files, create and output a delta that carries
** the first file into the second.
*/
void delta_create_cmd(void){
  Blob orig, target, delta;
  if( g.argc!=5 ){
    usage("ORIGIN TARGET DELTA");
  }
  if( blob_read_from_file(&orig, g.argv[2])<0 ){
    fossil_fatal("cannot read %s\n", g.argv[2]);
  }
  if( blob_read_from_file(&target, g.argv[3])<0 ){
    fossil_fatal("cannot read %s\n", g.argv[3]);
  }
  blob_delta_create(&orig, &target, &delta);
  if( blob_write_to_file(&delta, g.argv[4])<blob_size(&delta) ){
    fossil_fatal("cannot write %s\n", g.argv[4]);
  }
  blob_reset(&orig);
  blob_reset(&target);
  blob_reset(&delta);
}

/*
** Apply the delta in pDelta to the original file pOriginal to generate
** the target file pTarget.  The pTarget blob is initialized by this
** routine.
**
** It works ok for pTarget and pOriginal to be the same blob.

After (lines 41-115):
  blob_resize(pDelta, len);
  return 0;
}

/*
** COMMAND:  test-delta-create
**
** Usage: %fossil test-delta-create FILE1 FILE2 DELTA
**
** Create and output a delta that carries FILE1 into FILE2.
** Store the result in DELTA.
*/
void delta_create_cmd(void){
  Blob orig, target, delta;
  if( g.argc!=5 ){
    usage("ORIGIN TARGET DELTA");
  }
  if( blob_read_from_file(&orig, g.argv[2])<0 ){
    fossil_fatal("cannot read %s\n", g.argv[2]);
  }
  if( blob_read_from_file(&target, g.argv[3])<0 ){
    fossil_fatal("cannot read %s\n", g.argv[3]);
  }
  blob_delta_create(&orig, &target, &delta);
  if( blob_write_to_file(&delta, g.argv[4])<blob_size(&delta) ){
    fossil_fatal("cannot write %s\n", g.argv[4]);
  }
  blob_reset(&orig);
  blob_reset(&target);
  blob_reset(&delta);
}

/*
** COMMAND:  test-delta-analyze
**
** Usage: %fossil test-delta-analyze FILE1 FILE2
**
** Create a delta that carries FILE1 into FILE2.  Print the
** number of bytes copied and the number of bytes inserted.
*/
void delta_analyze_cmd(void){
  Blob orig, target, delta;
  int nCopy = 0;
  int nInsert = 0;
  int sz1, sz2;
  if( g.argc!=4 ){
    usage("ORIGIN TARGET");
  }
  if( blob_read_from_file(&orig, g.argv[2])<0 ){
    fossil_fatal("cannot read %s\n", g.argv[2]);
  }
  if( blob_read_from_file(&target, g.argv[3])<0 ){
    fossil_fatal("cannot read %s\n", g.argv[3]);
  }
  blob_delta_create(&orig, &target, &delta);
  delta_analyze(blob_buffer(&delta), blob_size(&delta), &nCopy, &nInsert);
  sz1 = blob_size(&orig);
  sz2 = blob_size(&target);
  blob_reset(&orig);
  blob_reset(&target);
  blob_reset(&delta);
  fossil_print("original size:  %8d\n", sz1);
  fossil_print("bytes copied:   %8d (%.1f%% of target)\n",
               nCopy, (100.0*nCopy)/sz2);
  fossil_print("bytes inserted: %8d (%.1f%% of target)\n",
               nInsert, (100.0*nInsert)/sz2);
  fossil_print("final size:     %8d\n", sz2);
}

/*
** Apply the delta in pDelta to the original file pOriginal to generate
** the target file pTarget.  The pTarget blob is initialized by this
** routine.
**
** It works ok for pTarget and pOriginal to be the same blob.
Before (lines 100-138):
  *pTarget = out;
  return len;
}

/*
** COMMAND:  test-delta-apply
**
** Given an input files and a delta, apply the delta to the input file

** and write the result.
*/
void delta_apply_cmd(void){
  Blob orig, target, delta;
  if( g.argc!=5 ){
    usage("ORIGIN DELTA TARGET");
  }
  if( blob_read_from_file(&orig, g.argv[2])<0 ){
    fossil_fatal("cannot read %s\n", g.argv[2]);
  }
  if( blob_read_from_file(&delta, g.argv[3])<0 ){
    fossil_fatal("cannot read %s\n", g.argv[3]);
  }
  blob_delta_apply(&orig, &delta, &target);
  if( blob_write_to_file(&target, g.argv[4])<blob_size(&target) ){
    fossil_fatal("cannot write %s\n", g.argv[4]);
  }
  blob_reset(&orig);
  blob_reset(&target);
  blob_reset(&delta);
}


/*
** COMMAND:  test-delta


**
** Read two files named on the command-line.  Create and apply deltas
** going in both directions.  Verify that the original files are
** correctly recovered.
*/
void cmd_test_delta(void){
  Blob f1, f2;     /* Original file content */

After (lines 139-181):
  *pTarget = out;
  return len;
}

/*
** COMMAND:  test-delta-apply
**
** Usage: %fossil test-delta-apply FILE1 DELTA
**
** Apply DELTA to FILE1 and output the result.
*/
void delta_apply_cmd(void){
  Blob orig, target, delta;
  if( g.argc!=5 ){
    usage("ORIGIN DELTA TARGET");
  }
  if( blob_read_from_file(&orig, g.argv[2])<0 ){
    fossil_fatal("cannot read %s\n", g.argv[2]);
  }
  if( blob_read_from_file(&delta, g.argv[3])<0 ){
    fossil_fatal("cannot read %s\n", g.argv[3]);
  }
  blob_delta_apply(&orig, &delta, &target);
  if( blob_write_to_file(&target, g.argv[4])<blob_size(&target) ){
    fossil_fatal("cannot write %s\n", g.argv[4]);
  }
  blob_reset(&orig);
  blob_reset(&target);
  blob_reset(&delta);
}


/*
** COMMAND:  test-delta
**
** Usage: %fossil test-delta FILE1 FILE2
**
** Read two files named on the command-line.  Create and apply deltas
** going in both directions.  Verify that the original files are
** correctly recovered.
*/
void cmd_test_delta(void){
  Blob f1, f2;     /* Original file content */
Changes to src/diffcmd.c.
Before (lines 30-43):
#  define NULL_DEVICE "/dev/null"
#endif

/*
** Used when the name for the diff is unknown.
*/
#define DIFF_NO_NAME  "(unknown)"

/*
** Print the "Index:" message that patches wants to see at the top of a diff.
*/
void diff_print_index(const char *zFile, u64 diffFlags){
  if( (diffFlags & (DIFF_SIDEBYSIDE|DIFF_BRIEF))==0 ){
    char *z = mprintf("Index: %s\n%.66c\n", zFile, '=');

After (lines 30-65):
#  define NULL_DEVICE "/dev/null"
#endif

/*
** Used when the name for the diff is unknown.
*/
#define DIFF_NO_NAME  "(unknown)"

/*
** Use the "exec-rel-paths" setting and the --exec-abs-paths and
** --exec-rel-paths command line options to determine whether
** certain external commands are executed using relative paths.
*/
static int determine_exec_relative_option(int force)
{
  static int relativePaths = -1;
  if( force || relativePaths==-1 ){
    int relPathOption = find_option("exec-rel-paths", 0, 0)!=0;
    int absPathOption = find_option("exec-abs-paths", 0, 0)!=0;
#if defined(FOSSIL_ENABLE_EXEC_REL_PATHS)
    relativePaths = db_get_boolean("exec-rel-paths", 1);
#else
    relativePaths = db_get_boolean("exec-rel-paths", 0);
#endif
    if( relPathOption ){ relativePaths = 1; }
    if( absPathOption ){ relativePaths = 0; }
  }
  return relativePaths;
}
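In practice the command-line flags win over the setting (--exec-abs-paths is applied last, so it also overrides --exec-rel-paths), and the FOSSIL_ENABLE_EXEC_REL_PATHS define only changes the fallback default. A minimal caller sketch (illustrative only):

  if( determine_exec_relative_option(0) ){
    /* hand relative path names to the external command */
  }else{
    /* hand absolute path names rooted at g.zLocalRoot to the external command */
  }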

/*
** Print the "Index:" message that patches wants to see at the top of a diff.
*/
void diff_print_index(const char *zFile, u64 diffFlags){
  if( (diffFlags & (DIFF_SIDEBYSIDE|DIFF_BRIEF))==0 ){
    char *z = mprintf("Index: %s\n%.66c\n", zFile, '=');
Before (lines 274-316):
    file_delete(zTemp1);
    file_delete(zTemp2);
    blob_reset(&cmd);
  }
}

/*
** Do a diff against a single file named in zFileTreeName from version zFrom
** against the same file on disk.
**
** Use the internal diff logic if zDiffCmd is NULL.  Otherwise call the
** command zDiffCmd to do the diffing.
**
** When using an external diff program, zBinGlob contains the GLOB patterns
** for file names to treat as binary.  If fIncludeBinary is zero, these files
** will be skipped in addition to files that may contain binary content.
*/
static void diff_one_against_disk(
  const char *zFrom,        /* Name of file */
  const char *zDiffCmd,     /* Use this "diff" command */
  const char *zBinGlob,     /* Treat file names matching this as binary */
  int fIncludeBinary,       /* Include binary files for external diff */
  u64 diffFlags,            /* Diff control flags */
  const char *zFileTreeName
){
  Blob fname;
  Blob content;
  int isLink;
  int isBin;
  file_tree_name(zFileTreeName, &fname, 0, 1);
  historical_version_of_file(zFrom, blob_str(&fname), &content, &isLink, 0,
                             fIncludeBinary ? 0 : &isBin, 0);
  if( !isLink != !file_wd_islink(zFrom) ){
    fossil_print("%s",DIFF_CANNOT_COMPUTE_SYMLINK);
  }else{
    diff_file(&content, isBin, zFileTreeName, zFileTreeName,
              zDiffCmd, zBinGlob, fIncludeBinary, diffFlags);
  }
  blob_reset(&content);
  blob_reset(&fname);
}

/*

After (lines 296-338):
    file_delete(zTemp1);
    file_delete(zTemp2);
    blob_reset(&cmd);
  }
}

/*
** Do a diff against a single file named in zFile from version zFrom
** against the same file on disk.
**
** Use the internal diff logic if zDiffCmd is NULL.  Otherwise call the
** command zDiffCmd to do the diffing.
**
** When using an external diff program, zBinGlob contains the GLOB patterns
** for file names to treat as binary.  If fIncludeBinary is zero, these files
** will be skipped in addition to files that may contain binary content.
*/
static void diff_one_against_disk(
  const char *zFrom,        /* Version tag for the "before" file */
  const char *zDiffCmd,     /* Use this "diff" command */
  const char *zBinGlob,     /* Treat file names matching this as binary */
  int fIncludeBinary,       /* Include binary files for external diff */
  u64 diffFlags,            /* Diff control flags */
  const char *zFile         /* Name of the file to be diffed */
){
  Blob fname;
  Blob content;
  int isLink;
  int isBin;
  file_tree_name(zFile, &fname, 0, 1);
  historical_version_of_file(zFrom, blob_str(&fname), &content, &isLink, 0,
                             fIncludeBinary ? 0 : &isBin, 0);
  if( !isLink != !file_wd_islink(zFrom) ){
    fossil_print("%s",DIFF_CANNOT_COMPUTE_SYMLINK);
  }else{
    diff_file(&content, isBin, zFile, zFile,
              zDiffCmd, zBinGlob, fIncludeBinary, diffFlags);
  }
  blob_reset(&content);
  blob_reset(&fname);
}

/*
Before (lines 382-411):
  while( db_step(&q)==SQLITE_ROW ){
    const char *zPathname = db_column_text(&q,0);
    int isDeleted = db_column_int(&q, 1);
    int isChnged = db_column_int(&q,2);
    int isNew = db_column_int(&q,3);
    int srcid = db_column_int(&q, 4);
    int isLink = db_column_int(&q, 5);
    char *zToFree = mprintf("%s%s", g.zLocalRoot, zPathname);
    const char *zFullName = zToFree;
    int showDiff = 1;
    if( isDeleted ){
      fossil_print("DELETED  %s\n", zPathname);
      if( !asNewFile ){ showDiff = 0; zFullName = NULL_DEVICE; }
    }else if( file_access(zFullName, F_OK) ){
      fossil_print("MISSING  %s\n", zPathname);
      if( !asNewFile ){ showDiff = 0; }
    }else if( isNew ){
      fossil_print("ADDED    %s\n", zPathname);
      srcid = 0;
      if( !asNewFile ){ showDiff = 0; }
    }else if( isChnged==3 ){
      fossil_print("ADDED_BY_MERGE %s\n", zPathname);
      srcid = 0;
      if( !asNewFile ){ showDiff = 0; }
    }
    if( showDiff ){
      Blob content;
      int isBin;
      if( !isLink != !file_wd_islink(zFullName) ){
        diff_print_index(zPathname, diffFlags);

After (lines 404-446):
  while( db_step(&q)==SQLITE_ROW ){
    const char *zPathname = db_column_text(&q,0);
    int isDeleted = db_column_int(&q, 1);
    int isChnged = db_column_int(&q,2);
    int isNew = db_column_int(&q,3);
    int srcid = db_column_int(&q, 4);
    int isLink = db_column_int(&q, 5);

    const char *zFullName;
    int showDiff = 1;
    Blob fname;

    if( determine_exec_relative_option(0) ){
      blob_zero(&fname);
      file_relative_name(zPathname, &fname, 1);
    }else{
      blob_set(&fname, g.zLocalRoot);
      blob_append(&fname, zPathname, -1);
    }
    zFullName = blob_str(&fname);
    if( isDeleted ){
      fossil_print("DELETED  %s\n", zPathname);
      if( !asNewFile ){ showDiff = 0; zFullName = NULL_DEVICE; }
    }else if( file_access(zFullName, F_OK) ){
      fossil_print("MISSING  %s\n", zPathname);
      if( !asNewFile ){ showDiff = 0; }
    }else if( isNew ){
      fossil_print("ADDED    %s\n", zPathname);
      srcid = 0;
      if( !asNewFile ){ showDiff = 0; }
    }else if( isChnged==3 ){
      fossil_print("ADDED_BY_MERGE %s\n", zPathname);
      srcid = 0;
      if( !asNewFile ){ showDiff = 0; }
    }else if( isChnged==5 ){
      fossil_print("ADDED_BY_INTEGRATE %s\n", zPathname);
      srcid = 0;
      if( !asNewFile ){ showDiff = 0; }
    }
    if( showDiff ){
      Blob content;
      int isBin;
      if( !isLink != !file_wd_islink(zFullName) ){
        diff_print_index(zPathname, diffFlags);
Before (lines 420-466):
      }
      isBin = fIncludeBinary ? 0 : looks_like_binary(&content);
      diff_print_index(zPathname, diffFlags);
      diff_file(&content, isBin, zFullName, zPathname, zDiffCmd,
                zBinGlob, fIncludeBinary, diffFlags);
      blob_reset(&content);
    }
    free(zToFree);
  }
  db_finalize(&q);
  db_end_transaction(1);  /* ROLLBACK */
}

/*
** Output the differences between two versions of a single file.
** zFrom and zTo are the check-ins containing the two file versions.
**
** Use the internal diff logic if zDiffCmd is NULL.  Otherwise call the
** command zDiffCmd to do the diffing.
**
** When using an external diff program, zBinGlob contains the GLOB patterns
** for file names to treat as binary.  If fIncludeBinary is zero, these files
** will be skipped in addition to files that may contain binary content.
*/
static void diff_one_two_versions(
  const char *zFrom,
  const char *zTo,
  const char *zDiffCmd,
  const char *zBinGlob,
  int fIncludeBinary,
  u64 diffFlags,
  const char *zFileTreeName
){
  char *zName;
  Blob fname;
  Blob v1, v2;
  int isLink1, isLink2;
  int isBin1, isBin2;
  if( diffFlags & DIFF_BRIEF ) return;
  file_tree_name(zFileTreeName, &fname, 0, 1);
  zName = blob_str(&fname);
  historical_version_of_file(zFrom, zName, &v1, &isLink1, 0,
                             fIncludeBinary ? 0 : &isBin1, 0);
  historical_version_of_file(zTo, zName, &v2, &isLink2, 0,
                             fIncludeBinary ? 0 : &isBin2, 0);
  if( isLink1 != isLink2 ){
    diff_print_filenames(zName, zName, diffFlags);







|




>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>













|
|
|
|
|
|
|







|







455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
      }
      isBin = fIncludeBinary ? 0 : looks_like_binary(&content);
      diff_print_index(zPathname, diffFlags);
      diff_file(&content, isBin, zFullName, zPathname, zDiffCmd,
                zBinGlob, fIncludeBinary, diffFlags);
      blob_reset(&content);
    }
    blob_reset(&fname);
  }
  db_finalize(&q);
  db_end_transaction(1);  /* ROLLBACK */
}

/*
** Do a diff of a single file named in zFile against the
** version of this file held in the undo buffer.
**
** Use the internal diff logic if zDiffCmd is NULL.  Otherwise call the
** command zDiffCmd to do the diffing.
**
** When using an external diff program, zBinGlob contains the GLOB patterns
** for file names to treat as binary.  If fIncludeBinary is zero, these files
** will be skipped in addition to files that may contain binary content.
*/
static void diff_one_against_undo(
  const char *zDiffCmd,     /* Use this "diff" command */
  const char *zBinGlob,     /* Treat file names matching this as binary */
  int fIncludeBinary,       /* Include binary files for external diff */
  u64 diffFlags,            /* Diff control flags */
  const char *zFile         /* Name of the file to be diffed */
){
  Blob fname;
  Blob content;

  blob_init(&content, 0, 0);
  file_tree_name(zFile, &fname, 0, 1);
  db_blob(&content, "SELECT content FROM undo WHERE pathname=%Q",
                    blob_str(&fname));
  if( blob_size(&content) ){
    diff_file(&content, 0, zFile, zFile,
              zDiffCmd, zBinGlob, fIncludeBinary, diffFlags);
  }
  blob_reset(&content);
  blob_reset(&fname);
}

/*
** Run a diff between the undo buffer and files on disk.
**
** Use the internal diff logic if zDiffCmd is NULL.  Otherwise call the
** command zDiffCmd to do the diffing.
**
** When using an external diff program, zBinGlob contains the GLOB patterns
** for file names to treat as binary.  If fIncludeBinary is zero, these files
** will be skipped in addition to files that may contain binary content.
*/
static void diff_all_against_undo(
  const char *zDiffCmd,     /* Use this diff command.  NULL for built-in */
  const char *zBinGlob,     /* Treat file names matching this as binary */
  int fIncludeBinary,       /* Include binary files for external diff */
  u64 diffFlags             /* Flags controlling diff output */
){
  Stmt q;
  Blob content;
  db_prepare(&q, "SELECT pathname, content FROM undo");
  blob_init(&content, 0, 0);
  while( db_step(&q)==SQLITE_ROW ){
    const char *zFile = (const char*)db_column_text(&q, 0);
    char *zFullName = mprintf("%s%s", g.zLocalRoot, zFile);
    db_column_blob(&q, 1, &content);
    diff_file(&content, 0, zFullName, zFile,
              zDiffCmd, zBinGlob, fIncludeBinary, diffFlags);
    fossil_free(zFullName);
    blob_reset(&content);
  }
  db_finalize(&q);
}

/*
** Output the differences between two versions of a single file.
** zFrom and zTo are the check-ins containing the two file versions.
**
** Use the internal diff logic if zDiffCmd is NULL.  Otherwise call the
** command zDiffCmd to do the diffing.
**
** When using an external diff program, zBinGlob contains the GLOB patterns
** for file names to treat as binary.  If fIncludeBinary is zero, these files
** will be skipped in addition to files that may contain binary content.
*/
static void diff_one_two_versions(
  const char *zFrom,            /* Version tag for the "before" file */
  const char *zTo,              /* Version tag for the "after" file */
  const char *zDiffCmd,         /* Use this "diff" command */
  const char *zBinGlob,         /* GLOB pattern for files that are binary */
  int fIncludeBinary,           /* True to show binary files */
  u64 diffFlags,                /* Diff flags */
  const char *zFile             /* Name of the file to be diffed */
){
  char *zName;
  Blob fname;
  Blob v1, v2;
  int isLink1, isLink2;
  int isBin1, isBin2;
  if( diffFlags & DIFF_BRIEF ) return;
  file_tree_name(zFile, &fname, 0, 1);
  zName = blob_str(&fname);
  historical_version_of_file(zFrom, zName, &v1, &isLink1, 0,
                             fIncludeBinary ? 0 : &isBin1, 0);
  historical_version_of_file(zTo, zName, &v2, &isLink2, 0,
                             fIncludeBinary ? 0 : &isBin2, 0);
  if( isLink1 != isLink2 ){
    diff_print_filenames(zName, zName, diffFlags);
Before (lines 598-612):
}

/*
** Return the name of the external diff command, or return NULL if
** no external diff command is defined.
*/
const char *diff_command_external(int guiDiff){
  char *zDefault;
  const char *zName;

  if( guiDiff ){
#if defined(_WIN32)
    zDefault = "WinDiff.exe";
#else
    zDefault = 0;

After (lines 698-712):
}

/*
** Return the name of the external diff command, or return NULL if
** no external diff command is defined.
*/
const char *diff_command_external(int guiDiff){
  const char *zDefault;
  const char *zName;

  if( guiDiff ){
#if defined(_WIN32)
    zDefault = "WinDiff.exe";
#else
    zDefault = 0;
Before (lines 745-836):
**
** Options:
**   --binary PATTERN           Treat files that match the glob PATTERN as binary
**   --branch BRANCH            Show diff of all changes on BRANCH
**   --brief                    Show filenames only
**   --context|-c N             Use N lines of context
**   --diff-binary BOOL         Include binary files when using external commands


**   --from|-r VERSION          select VERSION as source for the diff
**   --internal|-i              use internal diff logic
**   --side-by-side|-y          side-by-side diff
**   --strip-trailing-cr        Strip trailing CR
**   --tk                       Launch a Tcl/Tk GUI for display
**   --to VERSION               select VERSION as target for the diff

**   --unified                  unified diff
**   -v|--verbose               output complete text of added or deleted files
**   -w|--ignore-all-space      Ignore white space when comparing lines
**   -W|--width <num>           Width of lines in side-by-side diff
**   -Z|--ignore-trailing-space Ignore changes to end-of-line whitespace
*/
void diff_cmd(void){
  int isGDiff;               /* True for gdiff.  False for normal diff */
  int isInternDiff;          /* True for internal diff */
  int verboseFlag;           /* True if -v or --verbose flag is used */
  const char *zFrom;         /* Source version number */
  const char *zTo;           /* Target version number */
  const char *zBranch;       /* Branch to diff */
  const char *zDiffCmd = 0;  /* External diff command. NULL for internal diff */
  const char *zBinGlob = 0;  /* Treat file names matching this as binary */
  int fIncludeBinary = 0;    /* Include binary files for external diff */

  u64 diffFlags = 0;         /* Flags to control the DIFF */
  int f;

  if( find_option("tk",0,0)!=0 ){
    diff_tk("diff", 2);
    return;
  }
  isGDiff = g.argv[1][0]=='g';
  isInternDiff = find_option("internal","i",0)!=0;
  zFrom = find_option("from", "r", 1);
  zTo = find_option("to", 0, 1);
  zBranch = find_option("branch", 0, 1);

  diffFlags = diff_options();
  verboseFlag = find_option("verbose","v",0)!=0;
  if( !verboseFlag ){
    verboseFlag = find_option("new-file","N",0)!=0; /* deprecated */
  }
  if( verboseFlag ) diffFlags |= DIFF_VERBOSE;



  if( zBranch ){
    if( zTo || zFrom ){
      fossil_fatal("cannot use --from or --to with --branch");
    }
    zTo = zBranch;
    zFrom = mprintf("root:%s", zBranch);
  }
  if( zTo==0 ){
    db_must_be_within_tree();
    if( !isInternDiff ){
      zDiffCmd = diff_command_external(isGDiff);
    }
    zBinGlob = diff_get_binary_glob();
    fIncludeBinary = diff_include_binary_files();

    verify_all_options();





    if( g.argc>=3 ){

      for(f=2; f<g.argc; ++f){











        diff_one_against_disk(zFrom, zDiffCmd, zBinGlob, fIncludeBinary,
                              diffFlags, g.argv[f]);
      }
    }else{
      diff_all_against_disk(zFrom, zDiffCmd, zBinGlob, fIncludeBinary,
                            diffFlags);
    }
  }else if( zFrom==0 ){
    fossil_fatal("must use --from if --to is present");
  }else{
    db_find_and_open_repository(0, 0);
    if( !isInternDiff ){
      zDiffCmd = diff_command_external(isGDiff);
    }
    zBinGlob = diff_get_binary_glob();
    fIncludeBinary = diff_include_binary_files();
    verify_all_options();
    if( g.argc>=3 ){

      for(f=2; f<g.argc; ++f){
        diff_one_two_versions(zFrom, zTo, zDiffCmd, zBinGlob, fIncludeBinary,
                              diffFlags, g.argv[f]);
      }
    }else{
      diff_all_two_versions(zFrom, zTo, zDiffCmd, zBinGlob, fIncludeBinary,
                            diffFlags);
    }
  }
}

After (lines 845-958):
**
** Options:
**   --binary PATTERN           Treat files that match the glob PATTERN as binary
**   --branch BRANCH            Show diff of all changes on BRANCH
**   --brief                    Show filenames only
**   --context|-c N             Use N lines of context
**   --diff-binary BOOL         Include binary files when using external commands
**   --exec-abs-paths           Force absolute path names with external commands.
**   --exec-rel-paths           Force relative path names with external commands.
**   --from|-r VERSION          select VERSION as source for the diff
**   --internal|-i              use internal diff logic
**   --side-by-side|-y          side-by-side diff
**   --strip-trailing-cr        Strip trailing CR
**   --tk                       Launch a Tcl/Tk GUI for display
**   --to VERSION               select VERSION as target for the diff
**   --undo                     Diff against the "undo" buffer
**   --unified                  unified diff
**   -v|--verbose               output complete text of added or deleted files
**   -w|--ignore-all-space      Ignore white space when comparing lines
**   -W|--width <num>           Width of lines in side-by-side diff
**   -Z|--ignore-trailing-space Ignore changes to end-of-line whitespace
*/
void diff_cmd(void){
  int isGDiff;               /* True for gdiff.  False for normal diff */
  int isInternDiff;          /* True for internal diff */
  int verboseFlag;           /* True if -v or --verbose flag is used */
  const char *zFrom;         /* Source version number */
  const char *zTo;           /* Target version number */
  const char *zBranch;       /* Branch to diff */
  const char *zDiffCmd = 0;  /* External diff command. NULL for internal diff */
  const char *zBinGlob = 0;  /* Treat file names matching this as binary */
  int fIncludeBinary = 0;    /* Include binary files for external diff */
  int againstUndo = 0;       /* Diff against files in the undo buffer */
  u64 diffFlags = 0;         /* Flags to control the DIFF */


  if( find_option("tk",0,0)!=0 ){
    diff_tk("diff", 2);
    return;
  }
  isGDiff = g.argv[1][0]=='g';
  isInternDiff = find_option("internal","i",0)!=0;
  zFrom = find_option("from", "r", 1);
  zTo = find_option("to", 0, 1);
  zBranch = find_option("branch", 0, 1);
  againstUndo = find_option("undo",0,0)!=0;
  diffFlags = diff_options();
  verboseFlag = find_option("verbose","v",0)!=0;
  if( !verboseFlag ){
    verboseFlag = find_option("new-file","N",0)!=0; /* deprecated */
  }
  if( verboseFlag ) diffFlags |= DIFF_VERBOSE;
  if( againstUndo && (zFrom!=0 || zTo!=0 || zBranch!=0) ){
    fossil_fatal("cannot use --undo together with --from or --to or --branch");
  }
  if( zBranch ){
    if( zTo || zFrom ){
      fossil_fatal("cannot use --from or --to with --branch");
    }
    zTo = zBranch;
    zFrom = mprintf("root:%s", zBranch);
  }
  if( zTo==0 || againstUndo ){
    db_must_be_within_tree();
  }else if( zFrom==0 ){
    fossil_fatal("must use --from if --to is present");
  }else{
    db_find_and_open_repository(0, 0);
  }
  if( !isInternDiff ){
    zDiffCmd = diff_command_external(isGDiff);
  }
  zBinGlob = diff_get_binary_glob();
  fIncludeBinary = diff_include_binary_files();
  determine_exec_relative_option(1);
  verify_all_options();
  if( againstUndo ){
    if( db_lget_int("undo_available",0)==0 ){
      fossil_print("No undo or redo is available\n");
      return;
    }
    if( g.argc>=3 ){
      int i;
      for(i=2; i<g.argc; i++){
        diff_one_against_undo(zDiffCmd, zBinGlob, fIncludeBinary,
                              diffFlags, g.argv[i]);
      }
    }else{
      diff_all_against_undo(zDiffCmd, zBinGlob, fIncludeBinary,
                            diffFlags);
    }
  }else if( zTo==0 ){
    if( g.argc>=3 ){
      int i;
      for(i=2; i<g.argc; i++){
        diff_one_against_disk(zFrom, zDiffCmd, zBinGlob, fIncludeBinary,
                              diffFlags, g.argv[i]);
      }
    }else{
      diff_all_against_disk(zFrom, zDiffCmd, zBinGlob, fIncludeBinary,
                            diffFlags);
    }


  }else{
    if( g.argc>=3 ){
      int i;
      for(i=2; i<g.argc; i++){
        diff_one_two_versions(zFrom, zTo, zDiffCmd, zBinGlob, fIncludeBinary,
                              diffFlags, g.argv[i]);
      }
    }else{
      diff_all_two_versions(zFrom, zTo, zDiffCmd, zBinGlob, fIncludeBinary,
                            diffFlags);
    }
  }
}
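A hedged usage illustration for the new --undo path (the command lines below are examples, not part of the check-in):

    fossil update trunk         (an undoable operation records the undo buffer)
    fossil diff --undo          (compare the undo buffer against the files on disk)
    fossil diff --undo main.c   (the same, restricted to one file)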
Changes to src/doc.c.
Before (lines 158-192):
  { "kar",        3, "audio/midi"                        },
  { "latex",      5, "application/x-latex"               },
  { "lha",        3, "application/octet-stream"          },
  { "lsp",        3, "application/x-lisp"                },
  { "lzh",        3, "application/octet-stream"          },
  { "m",          1, "text/plain"                        },
  { "m3u",        3, "audio/x-mpegurl"                   },
  { "man",        3, "application/x-troff-man"           },
  { "markdown",   8, "text/x-markdown"                   },
  { "md",         2, "text/x-markdown"                   },
  { "me",         2, "application/x-troff-me"            },
  { "mesh",       4, "model/mesh"                        },
  { "mid",        3, "audio/midi"                        },
  { "midi",       4, "audio/midi"                        },
  { "mif",        3, "application/x-mif"                 },
  { "mime",       4, "www/mime"                          },
  { "mkd",        3, "text/x-markdown"                   },
  { "mov",        3, "video/quicktime"                   },
  { "movie",      5, "video/x-sgi-movie"                 },
  { "mp2",        3, "audio/mpeg"                        },
  { "mp3",        3, "audio/mpeg"                        },
  { "mp4",        3, "video/mp4"                         },
  { "mpe",        3, "video/mpeg"                        },
  { "mpeg",       4, "video/mpeg"                        },
  { "mpg",        3, "video/mpeg"                        },
  { "mpga",       4, "audio/mpeg"                        },
  { "ms",         2, "application/x-troff-ms"            },
  { "msh",        3, "model/mesh"                        },

  { "nc",         2, "application/x-netcdf"              },
  { "oda",        3, "application/oda"                   },
  { "odp",        3, "application/vnd.oasis.opendocument.presentation" },
  { "ods",        3, "application/vnd.oasis.opendocument.spreadsheet" },
  { "odt",        3, "application/vnd.oasis.opendocument.text" },
  { "ogg",        3, "application/ogg"                   },
  { "ogm",        3, "application/ogg"                   },

After (lines 158-193):
  { "kar",        3, "audio/midi"                        },
  { "latex",      5, "application/x-latex"               },
  { "lha",        3, "application/octet-stream"          },
  { "lsp",        3, "application/x-lisp"                },
  { "lzh",        3, "application/octet-stream"          },
  { "m",          1, "text/plain"                        },
  { "m3u",        3, "audio/x-mpegurl"                   },
  { "man",        3, "text/plain"                        },
  { "markdown",   8, "text/x-markdown"                   },
  { "md",         2, "text/x-markdown"                   },
  { "me",         2, "application/x-troff-me"            },
  { "mesh",       4, "model/mesh"                        },
  { "mid",        3, "audio/midi"                        },
  { "midi",       4, "audio/midi"                        },
  { "mif",        3, "application/x-mif"                 },
  { "mime",       4, "www/mime"                          },
  { "mkd",        3, "text/x-markdown"                   },
  { "mov",        3, "video/quicktime"                   },
  { "movie",      5, "video/x-sgi-movie"                 },
  { "mp2",        3, "audio/mpeg"                        },
  { "mp3",        3, "audio/mpeg"                        },
  { "mp4",        3, "video/mp4"                         },
  { "mpe",        3, "video/mpeg"                        },
  { "mpeg",       4, "video/mpeg"                        },
  { "mpg",        3, "video/mpeg"                        },
  { "mpga",       4, "audio/mpeg"                        },
  { "ms",         2, "application/x-troff-ms"            },
  { "msh",        3, "model/mesh"                        },
  { "n",          1, "text/plain"                        },
  { "nc",         2, "application/x-netcdf"              },
  { "oda",        3, "application/oda"                   },
  { "odp",        3, "application/vnd.oasis.opendocument.presentation" },
  { "ods",        3, "application/vnd.oasis.opendocument.spreadsheet" },
  { "odt",        3, "application/vnd.oasis.opendocument.text" },
  { "ogg",        3, "application/ogg"                   },
  { "ogm",        3, "application/ogg"                   },
Before (lines 520-562):
**
** The "ckout" CHECKIN is intended for development - to provide a mechanism
** for looking at what a file will look like using the /doc webpage after
** it gets checked in.
**
** The file extension is used to decide how to render the file.
**
** If FILE ends in "/" then names "FILE/index.html", "FILE/index.wiki",
** and "FILE/index.md" are  in that order.  If none of those are found,


** then FILE is completely replaced by "404.md" and tried.  If that is
** not found, then a default 404 screen is generated.
*/
void doc_page(void){
  const char *zName;                /* Argument to the /doc page */
  const char *zOrigName = "?";      /* Original document name */
  const char *zMime;                /* Document MIME type */
  char *zCheckin = "tip";           /* The check-in holding the document */
  int vid = 0;                      /* Artifact of check-in */
  int rid = 0;                      /* Artifact of file */
  int i;                            /* Loop counter */
  Blob filebody;                    /* Content of the documentation file */
  Blob title;                       /* Document title */
  int nMiss = (-1);                 /* Failed attempts to find the document */
  static const char *const azSuffix[] = {
     "index.html", "index.wiki", "index.md"



  };

  login_check_credentials();
  if( !g.perm.Read ){ login_needed(g.anon.Read); return; }
  blob_init(&title, 0, 0);
  db_begin_transaction();
  while( rid==0 && (++nMiss)<=ArraySize(azSuffix) ){

    zName = PD("name", "tip/index.wiki");
    for(i=0; zName[i] && zName[i]!='/'; i++){}
    zCheckin = mprintf("%.*s", i, zName);
    if( fossil_strcmp(zCheckin,"ckout")==0 && db_open_local(0)==0 ){
      zCheckin = "tip";
    }
    if( nMiss==ArraySize(azSuffix) ){
      zName = "404.md";
    }else if( zName[i]==0 ){
      assert( nMiss>=0 && nMiss<ArraySize(azSuffix) );
      zName = azSuffix[nMiss];

After (lines 521-569):
**
** The "ckout" CHECKIN is intended for development - to provide a mechanism
** for looking at what a file will look like using the /doc webpage after
** it gets checked in.
**
** The file extension is used to decide how to render the file.
**
** If FILE ends in "/" then the names "FILE/index.html", "FILE/index.wiki",
** and "FILE/index.md" are tried in that order.  If the binary was compiled
** with TH1 embedded documentation support and the "th1-docs" setting is
** enabled, the name "FILE/index.th1" is also tried.  If none of those are
** found, then FILE is completely replaced by "404.md" and tried.  If that
** is not found, then a default 404 screen is generated.
*/
void doc_page(void){
  const char *zName;                /* Argument to the /doc page */
  const char *zOrigName = "?";      /* Original document name */
  const char *zMime;                /* Document MIME type */
  char *zCheckin = "tip";           /* The check-in holding the document */
  int vid = 0;                      /* Artifact of check-in */
  int rid = 0;                      /* Artifact of file */
  int i;                            /* Loop counter */
  Blob filebody;                    /* Content of the documentation file */
  Blob title;                       /* Document title */
  int nMiss = (-1);                 /* Failed attempts to find the document */
  static const char *const azSuffix[] = {
     "index.html", "index.wiki", "index.md"
#ifdef FOSSIL_ENABLE_TH1_DOCS
      , "index.th1"
#endif
  };

  login_check_credentials();
  if( !g.perm.Read ){ login_needed(g.anon.Read); return; }
  blob_init(&title, 0, 0);
  db_begin_transaction();
  while( rid==0 && (++nMiss)<=ArraySize(azSuffix) ){
    zName = P("name");
    if( zName==0 || zName[0]==0 ) zName = "tip/index.wiki";
    for(i=0; zName[i] && zName[i]!='/'; i++){}
    zCheckin = mprintf("%.*s", i, zName);
    if( fossil_strcmp(zCheckin,"ckout")==0 && g.localOpen==0 ){
      zCheckin = "tip";
    }
    if( nMiss==ArraySize(azSuffix) ){
      zName = "404.md";
    }else if( zName[i]==0 ){
      assert( nMiss>=0 && nMiss<ArraySize(azSuffix) );
      zName = azSuffix[nMiss];
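As a worked example of the lookup order described above (the URL is hypothetical), a request for /doc/trunk/www/ resolves by trying, in order:

    www/index.html
    www/index.wiki
    www/index.md
    www/index.th1   (only when compiled with FOSSIL_ENABLE_TH1_DOCS and "th1-docs" is enabled)
    404.md          (FILE replaced entirely)
    the built-in 404 page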
Changes to src/foci.c.
Before (lines 165-185):
  int idxNum, const char *idxStr,
  int argc, sqlite3_value **argv
){
  FociCursor *pCur = (FociCursor *)pCursor;
  manifest_destroy(pCur->pMan);
  if( idxNum ){
    pCur->pMan = manifest_get(sqlite3_value_int(argv[0]), CFTYPE_MANIFEST, 0);
    pCur->iFile = 0;
    manifest_file_rewind(pCur->pMan);
    pCur->pFile = manifest_file_next(pCur->pMan, 0);

  }else{
    pCur->pMan = 0;
    pCur->iFile = 0;
  }

  return SQLITE_OK;
}

static int fociColumn(
  sqlite3_vtab_cursor *pCursor,
  sqlite3_context *ctx,
  int i

After (lines 165-186):
  int idxNum, const char *idxStr,
  int argc, sqlite3_value **argv
){
  FociCursor *pCur = (FociCursor *)pCursor;
  manifest_destroy(pCur->pMan);
  if( idxNum ){
    pCur->pMan = manifest_get(sqlite3_value_int(argv[0]), CFTYPE_MANIFEST, 0);
    if( pCur->pMan ){
      manifest_file_rewind(pCur->pMan);
      pCur->pFile = manifest_file_next(pCur->pMan, 0);
    }
  }else{
    pCur->pMan = 0;

  }
  pCur->iFile = 0;
  return SQLITE_OK;
}

static int fociColumn(
  sqlite3_vtab_cursor *pCursor,
  sqlite3_context *ctx,
  int i
Changes to src/http_transport.c.
Before (lines 76-102):
  }
}

/*
** Default SSH command
*/
#ifdef _WIN32
static char zDefaultSshCmd[] = "plink -ssh -T";
#else
static char zDefaultSshCmd[] = "ssh -e none -T";
#endif

/*
** SSH initialization of the transport layer
*/
int transport_ssh_open(UrlData *pUrlData){
  /* For SSH we need to create and run SSH fossil http
  ** to talk to the remote machine.
  */
  const char *zSsh;  /* The base SSH command */
  Blob zCmd;         /* The SSH command */
  char *zHost;       /* The host name to contact */
  int n;             /* Size of prefix string */

  socket_ssh_resolve_addr(pUrlData);
  zSsh = db_get("ssh-command", zDefaultSshCmd);
  blob_init(&zCmd, zSsh, -1);

After (lines 76-102):
  }
}

/*
** Default SSH command
*/
#ifdef _WIN32
static const char zDefaultSshCmd[] = "plink -ssh -T";
#else
static const char zDefaultSshCmd[] = "ssh -e none -T";
#endif

/*
** SSH initialization of the transport layer
*/
int transport_ssh_open(UrlData *pUrlData){
  /* For SSH we need to create and run an ssh process that runs "fossil http"
  ** on the remote machine.
  */
  char *zSsh;        /* The base SSH command */
  Blob zCmd;         /* The SSH command */
  char *zHost;       /* The host name to contact */
  int n;             /* Size of prefix string */

  socket_ssh_resolve_addr(pUrlData);
  zSsh = db_get("ssh-command", zDefaultSshCmd);
  blob_init(&zCmd, zSsh, -1);
Changes to src/import.c.
1417
1418
1419
1420
1421
1422
1423
1424
1425
1426
1427
1428
1429
1430
1431
          fossil_fatal("Missing Node-kind");
        }
        if( strncmp(zKind, "dir", 3)!=0 ){
          if( deltaFlag ){
            Blob deltaSrc;
            Blob target;
            rid = db_int(0, "SELECT rid FROM blob WHERE uuid=("
                            " SELECT uuid FROM xfiles"
                            "  WHERE tpath=%Q AND tbranch=%d"
                            ")", zFile, branchId);
            content_get(rid, &deltaSrc);
            svn_apply_svndiff(&rec.content, &deltaSrc, &target);
            rid = content_put(&target);
          }else{
            rid = content_put(&rec.content);







|







1417
1418
1419
1420
1421
1422
1423
1424
1425
1426
1427
1428
1429
1430
1431
          fossil_fatal("Missing Node-kind");
        }
        if( strncmp(zKind, "dir", 3)!=0 ){
          if( deltaFlag ){
            Blob deltaSrc;
            Blob target;
            rid = db_int(0, "SELECT rid FROM blob WHERE uuid=("
                            " SELECT tuuid FROM xfiles"
                            "  WHERE tpath=%Q AND tbranch=%d"
                            ")", zFile, branchId);
            content_get(rid, &deltaSrc);
            svn_apply_svndiff(&rec.content, &deltaSrc, &target);
            rid = content_put(&target);
          }else{
            rid = content_put(&rec.content);
Changes to src/info.c.
1884
1885
1886
1887
1888
1889
1890
1891
1892
1893
1894
1895
1896
1897
1898
  if( (objType & (OBJTYPE_WIKI|OBJTYPE_TICKET))!=0 ){
    style_submenu_element("Parsed", "Parsed", "%R/info/%s", zUuid);
  }
  if( descOnly ){
    style_submenu_element("Content", "Content", "%R/artifact/%s", zUuid);
  }else{
    style_submenu_element("Line Numbers", "Line Numbers",
                          "%R/info/%s%s",zUuid,
                          ((zLn&&*zLn) ? "" : "?txt=1&ln=0"));
    @ <hr />
    content_get(rid, &content);
    if( renderAsWiki ){
      wiki_render_by_mimetype(&content, zMime);
    }else if( renderAsHtml ){
      @ <iframe src="%R/raw/%T(blob_str(&downloadName))?name=%s(zUuid)"







|







1884
1885
1886
1887
1888
1889
1890
1891
1892
1893
1894
1895
1896
1897
1898
  if( (objType & (OBJTYPE_WIKI|OBJTYPE_TICKET))!=0 ){
    style_submenu_element("Parsed", "Parsed", "%R/info/%s", zUuid);
  }
  if( descOnly ){
    style_submenu_element("Content", "Content", "%R/artifact/%s", zUuid);
  }else{
    style_submenu_element("Line Numbers", "Line Numbers",
                          "%R/artifact/%s%s",zUuid,
                          ((zLn&&*zLn) ? "" : "?txt=1&ln=0"));
    @ <hr />
    content_get(rid, &content);
    if( renderAsWiki ){
      wiki_render_by_mimetype(&content, zMime);
    }else if( renderAsHtml ){
      @ <iframe src="%R/raw/%T(blob_str(&downloadName))?name=%s(zUuid)"
2274
2275
2276
2277
2278
2279
2280



























































































































2281
2282
2283
2284
2285
2286
2287
    }
    return 0;
  }
  while( fossil_isspace(zB[0]) ) zB++;
  while( fossil_isspace(zA[0]) ) zA++;
  return zA[0]==0 && zB[0]==0;
}




























































































































/*
** WEBPAGE: ci_edit
** URL:  /ci_edit?r=RID&c=NEWCOMMENT&u=NEWUSER
**
** Present a dialog for updating properties of a check-in.
**







>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>







2274
2275
2276
2277
2278
2279
2280
2281
2282
2283
2284
2285
2286
2287
2288
2289
2290
2291
2292
2293
2294
2295
2296
2297
2298
2299
2300
2301
2302
2303
2304
2305
2306
2307
2308
2309
2310
2311
2312
2313
2314
2315
2316
2317
2318
2319
2320
2321
2322
2323
2324
2325
2326
2327
2328
2329
2330
2331
2332
2333
2334
2335
2336
2337
2338
2339
2340
2341
2342
2343
2344
2345
2346
2347
2348
2349
2350
2351
2352
2353
2354
2355
2356
2357
2358
2359
2360
2361
2362
2363
2364
2365
2366
2367
2368
2369
2370
2371
2372
2373
2374
2375
2376
2377
2378
2379
2380
2381
2382
2383
2384
2385
2386
2387
2388
2389
2390
2391
2392
2393
2394
2395
2396
2397
2398
2399
2400
2401
2402
2403
2404
2405
2406
2407
2408
2409
2410
    }
    return 0;
  }
  while( fossil_isspace(zB[0]) ) zB++;
  while( fossil_isspace(zA[0]) ) zA++;
  return zA[0]==0 && zB[0]==0;
}

/*
** The following methods operate on the newtags temporary table
** that is used to collect various changes to be added to a control
** artifact for a check-in edit.
*/
static void init_newtags(void){
  db_multi_exec("CREATE TEMP TABLE newtags(tag UNIQUE, prefix, value)");
}

static void change_special(
  const char *zName,    /* Name of the special tag */
  const char *zOp,      /* Operation prefix (e.g. +,-,*) */
  const char *zValue    /* Value of the tag */
){
  db_multi_exec("REPLACE INTO newtags VALUES(%Q,'%q',%Q)", zName, zOp, zValue);
}

static void change_sym_tag(const char *zTag, const char *zOp){
  db_multi_exec("REPLACE INTO newtags VALUES('sym-%q',%Q,NULL)", zTag, zOp);
}

static void cancel_special(const char *zTag){
  change_special(zTag,"-",0);
}

static void add_color(const char *zNewColor, int fPropagateColor){
  change_special("bgcolor",fPropagateColor ? "*" : "+", zNewColor);
}

static void cancel_color(void){
  change_special("bgcolor","-",0);
}

static void add_comment(const char *zNewComment){
  change_special("comment","+",zNewComment);
}

static void add_date(const char *zNewDate){
  change_special("date","+",zNewDate);
}

static void add_user(const char *zNewUser){
  change_special("user","+",zNewUser);
}

static void add_tag(const char *zNewTag){
  change_sym_tag(zNewTag,"+");
}

static void cancel_tag(int rid, const char *zCancelTag){
  if( db_exists("SELECT 1 FROM tagxref, tag"
                " WHERE tagxref.rid=%d AND tagtype>0"
                "   AND tagxref.tagid=tag.tagid AND tagname='sym-%q'",
                rid, zCancelTag)
  ) change_sym_tag(zCancelTag,"-");
}

static void hide_branch(void){
  change_special("hidden","*",0);
}

static void close_leaf(int rid){
  change_special("closed",is_a_leaf(rid)?"+":"*",0);
}

static void change_branch(int rid, const char *zNewBranch){
  db_multi_exec(
    "REPLACE INTO newtags "
    " SELECT tagname, '-', NULL FROM tagxref, tag"
    "  WHERE tagxref.rid=%d AND tagtype==2"
    "    AND tagname GLOB 'sym-*'"
    "    AND tag.tagid=tagxref.tagid",
    rid
  );
  change_special("branch","*",zNewBranch);
  change_sym_tag(zNewBranch,"*");
}

/*
** The apply_newtags method is called after all newtags have been added;
** it completes the control artifact and writes it to the DB.
*/
static void apply_newtags(Blob *ctrl, int rid, const char *zUuid){
  Stmt q;
  int nChng = 0;

  db_prepare(&q, "SELECT tag, prefix, value FROM newtags"
                 " ORDER BY prefix || tag");
  while( db_step(&q)==SQLITE_ROW ){
    const char *zTag = db_column_text(&q, 0);
    const char *zPrefix = db_column_text(&q, 1);
    const char *zValue = db_column_text(&q, 2);
    nChng++;
    if( zValue ){
      blob_appendf(ctrl, "T %s%F %s %F\n", zPrefix, zTag, zUuid, zValue);
    }else{
      blob_appendf(ctrl, "T %s%F %s\n", zPrefix, zTag, zUuid);
    }
  }
  db_finalize(&q);
  if( nChng>0 ){
    int nrid;
    Blob cksum;
    blob_appendf(ctrl, "U %F\n", login_name());
    md5sum_blob(ctrl, &cksum);
    blob_appendf(ctrl, "Z %b\n", &cksum);
    db_begin_transaction();
    g.markPrivate = content_is_private(rid);
    nrid = content_put(ctrl);
    manifest_crosslink(nrid, ctrl, MC_PERMIT_HOOKS);
    assert( blob_is_reset(ctrl) );
    db_end_transaction(0);
  }
}

/*
** This method checks that the date can be parsed.
** Returns 1 if datetime() can validate, 0 otherwise.
*/
int is_datetime(const char* zDate){
  return db_int(0, "SELECT datetime(%Q) NOT NULL", zDate);
}
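
/*
** A minimal usage sketch (hypothetical helper, never called from the code
** above): how the newtags helpers combine to amend a check-in.  The rid and
** zUuid are assumed to identify an existing check-in, and the comment and
** date values are placeholders.
*/
#if 0
static void example_amend_with_newtags(int rid, const char *zUuid){
  Blob ctrl;
  char *zNow = date_in_standard_format("now");
  blob_zero(&ctrl);
  blob_appendf(&ctrl, "D %s\n", zNow);
  init_newtags();                           /* start with an empty tag set */
  add_comment("Revised check-in comment");  /* queue a "+comment" tag */
  if( is_datetime("2015-11-03 05:47:00") ){
    add_date("2015-11-03 05:47:00");        /* queue a "+date" tag */
  }
  apply_newtags(&ctrl, rid, zUuid);         /* emit T/U/Z cards, store artifact */
}
#endif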

/*
** WEBPAGE: ci_edit
** URL:  /ci_edit?r=RID&c=NEWCOMMENT&u=NEWUSER
**
** Present a dialog for updating properties of a check-in.
**
2352
2353
2354
2355
2356
2357
2358
2359
2360
2361
2362
2363
2364
2365
2366
2367
2368
2369
2370
2371
2372
2373
2374
2375
2376
2377
2378
2379
2380
2381
2382
2383
2384
2385
2386
2387
2388
2389
2390
2391
2392
2393
2394
2395
2396
2397
2398
2399
2400
2401
2402
2403
2404
2405
2406
2407
2408
2409
2410
2411
2412
2413
2414
2415
2416
2417
2418
2419
2420
2421
2422
2423
2424
2425
2426
2427
2428
2429
2430
2431
2432
2433
2434
2435
2436
2437
2438
2439
2440
2441
2442
2443
2444
2445
2446
2447
2448
2449
2450
2451
2452
2453
2454
2455
2456
2457
2458
2459
2460
2461
2462
  zNewBrFlag = P("newbr") ? " checked" : "";
  zNewBranch = PDT("brname","");
  zCloseFlag = P("close") ? " checked" : "";
  zHideFlag = P("hide") ? " checked" : "";
  if( P("apply") ){
    Blob ctrl;
    char *zNow;
    int nChng = 0;

    login_verify_csrf_secret();
    blob_zero(&ctrl);
    zNow = date_in_standard_format(zChngTime ? zChngTime : "now");
    blob_appendf(&ctrl, "D %s\n", zNow);
    db_multi_exec("CREATE TEMP TABLE newtags(tag UNIQUE, prefix, value)");
    if( zNewColor[0]
     && (fPropagateColor!=fNewPropagateColor
             || fossil_strcmp(zColor,zNewColor)!=0)
    ){
      char *zPrefix = "+";
      if( fNewPropagateColor ){
        zPrefix = "*";
      }
      db_multi_exec("REPLACE INTO newtags VALUES('bgcolor',%Q,%Q)",
                    zPrefix, zNewColor);
    }
    if( zNewColor[0]==0 && zColor[0]!=0 ){
      db_multi_exec("REPLACE INTO newtags VALUES('bgcolor','-',NULL)");
    }
    if( comment_compare(zComment,zNewComment)==0 ){
      db_multi_exec("REPLACE INTO newtags VALUES('comment','+',%Q)",
                    zNewComment);
    }
    if( fossil_strcmp(zDate,zNewDate)!=0 ){
      db_multi_exec("REPLACE INTO newtags VALUES('date','+',%Q)",
                    zNewDate);
    }
    if( fossil_strcmp(zUser,zNewUser)!=0 ){
      db_multi_exec("REPLACE INTO newtags VALUES('user','+',%Q)", zNewUser);
    }
    db_prepare(&q,
       "SELECT tag.tagid, tagname FROM tagxref, tag"
       " WHERE tagxref.rid=%d AND tagtype>0 AND tagxref.tagid=tag.tagid",
       rid
    );
    while( db_step(&q)==SQLITE_ROW ){
      int tagid = db_column_int(&q, 0);
      const char *zTag = db_column_text(&q, 1);
      char zLabel[30];
      sqlite3_snprintf(sizeof(zLabel), zLabel, "c%d", tagid);
      if( P(zLabel) ){
        db_multi_exec("REPLACE INTO newtags VALUES(%Q,'-',NULL)", zTag);
      }
    }
    db_finalize(&q);
    if( zHideFlag[0] ){
      db_multi_exec("REPLACE INTO newtags VALUES('hidden','*',NULL)");
    }
    if( zCloseFlag[0] ){
      db_multi_exec("REPLACE INTO newtags VALUES('closed','%s',NULL)",
          is_a_leaf(rid)?"+":"*");
    }
    if( zNewTagFlag[0] && zNewTag[0] ){
      db_multi_exec("REPLACE INTO newtags VALUES('sym-%q','+',NULL)", zNewTag);
    }
    if( zNewBrFlag[0] && zNewBranch[0] ){
      db_multi_exec(
        "REPLACE INTO newtags "
        " SELECT tagname, '-', NULL FROM tagxref, tag"
        "  WHERE tagxref.rid=%d AND tagtype==2"
        "    AND tagname GLOB 'sym-*'"
        "    AND tag.tagid=tagxref.tagid",
        rid
      );
      db_multi_exec("REPLACE INTO newtags VALUES('branch','*',%Q)", zNewBranch);
      db_multi_exec("REPLACE INTO newtags VALUES('sym-%q','*',NULL)",
                    zNewBranch);
    }
    db_prepare(&q, "SELECT tag, prefix, value FROM newtags"
                   " ORDER BY prefix || tag");
    while( db_step(&q)==SQLITE_ROW ){
      const char *zTag = db_column_text(&q, 0);
      const char *zPrefix = db_column_text(&q, 1);
      const char *zValue = db_column_text(&q, 2);
      nChng++;
      if( zValue ){
        blob_appendf(&ctrl, "T %s%F %s %F\n", zPrefix, zTag, zUuid, zValue);
      }else{
        blob_appendf(&ctrl, "T %s%F %s\n", zPrefix, zTag, zUuid);
      }
    }
    db_finalize(&q);
    if( nChng>0 ){
      int nrid;
      Blob cksum;
      blob_appendf(&ctrl, "U %F\n", login_name());
      md5sum_blob(&ctrl, &cksum);
      blob_appendf(&ctrl, "Z %b\n", &cksum);
      db_begin_transaction();
      g.markPrivate = content_is_private(rid);
      nrid = content_put(&ctrl);
      manifest_crosslink(nrid, &ctrl, MC_PERMIT_HOOKS);
      assert( blob_is_reset(&ctrl) );
      db_end_transaction(0);
    }
    cgi_redirectf("ci?name=%s", zUuid);
  }
  blob_zero(&comment);
  blob_append(&comment, zNewComment, -1);
  zUuid[10] = 0;
  style_header("Edit Check-in [%s]", zUuid);
  /*







<





|



<
<
|
<
<
<
<
<
|
<
<
|
<
<
<
|
<
<
<
|
<
<










|
<
|
<

|
<
<
|
<
<
<
|
<
<
|
<
<
<
<
<
<
<
<
<
<
<
<
<
<
<
<
<
<
<
<
<
<
<
<
<
<
<
<
<
<
|
<
<
<
<
<
<
<
<







2475
2476
2477
2478
2479
2480
2481

2482
2483
2484
2485
2486
2487
2488
2489
2490


2491





2492


2493



2494



2495


2496
2497
2498
2499
2500
2501
2502
2503
2504
2505
2506

2507

2508
2509


2510



2511


2512






























2513








2514
2515
2516
2517
2518
2519
2520
  zNewBrFlag = P("newbr") ? " checked" : "";
  zNewBranch = PDT("brname","");
  zCloseFlag = P("close") ? " checked" : "";
  zHideFlag = P("hide") ? " checked" : "";
  if( P("apply") ){
    Blob ctrl;
    char *zNow;


    login_verify_csrf_secret();
    blob_zero(&ctrl);
    zNow = date_in_standard_format(zChngTime ? zChngTime : "now");
    blob_appendf(&ctrl, "D %s\n", zNow);
    init_newtags();
    if( zNewColor[0]
     && (fPropagateColor!=fNewPropagateColor
             || fossil_strcmp(zColor,zNewColor)!=0)


    ) add_color(zNewColor,fNewPropagateColor);





    if( zNewColor[0]==0 && zColor[0]!=0 ) cancel_color();


    if( comment_compare(zComment,zNewComment)==0 ) add_comment(zNewComment);



    if( fossil_strcmp(zDate,zNewDate)!=0 ) add_date(zNewDate);



    if( fossil_strcmp(zUser,zNewUser)!=0 ) add_user(zNewUser);


    db_prepare(&q,
       "SELECT tag.tagid, tagname FROM tagxref, tag"
       " WHERE tagxref.rid=%d AND tagtype>0 AND tagxref.tagid=tag.tagid",
       rid
    );
    while( db_step(&q)==SQLITE_ROW ){
      int tagid = db_column_int(&q, 0);
      const char *zTag = db_column_text(&q, 1);
      char zLabel[30];
      sqlite3_snprintf(sizeof(zLabel), zLabel, "c%d", tagid);
      if( P(zLabel) ) cancel_special(zTag);

    }

    db_finalize(&q);
    if( zHideFlag[0] ) hide_branch();


    if( zCloseFlag[0] ) close_leaf(rid);



    if( zNewTagFlag[0] && zNewTag[0] ) add_tag(zNewTag);


    if( zNewBrFlag[0] && zNewBranch[0] ) change_branch(rid,zNewBranch);






























    apply_newtags(&ctrl, rid, zUuid);








    cgi_redirectf("ci?name=%s", zUuid);
  }
  blob_zero(&comment);
  blob_append(&comment, zNewComment, -1);
  zUuid[10] = 0;
  style_header("Edit Check-in [%s]", zUuid);
  /*
2650
2651
2652
2653
2654
2655
2656















































































































































































































  @ <input type="submit" name="apply" value="Apply Changes" />
  @ <input type="submit" name="cancel" value="Cancel" />
  @ </td></tr>
  @ </table>
  @ </div></form>
  style_footer();
}






















































































































































































































>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
2708
2709
2710
2711
2712
2713
2714
2715
2716
2717
2718
2719
2720
2721
2722
2723
2724
2725
2726
2727
2728
2729
2730
2731
2732
2733
2734
2735
2736
2737
2738
2739
2740
2741
2742
2743
2744
2745
2746
2747
2748
2749
2750
2751
2752
2753
2754
2755
2756
2757
2758
2759
2760
2761
2762
2763
2764
2765
2766
2767
2768
2769
2770
2771
2772
2773
2774
2775
2776
2777
2778
2779
2780
2781
2782
2783
2784
2785
2786
2787
2788
2789
2790
2791
2792
2793
2794
2795
2796
2797
2798
2799
2800
2801
2802
2803
2804
2805
2806
2807
2808
2809
2810
2811
2812
2813
2814
2815
2816
2817
2818
2819
2820
2821
2822
2823
2824
2825
2826
2827
2828
2829
2830
2831
2832
2833
2834
2835
2836
2837
2838
2839
2840
2841
2842
2843
2844
2845
2846
2847
2848
2849
2850
2851
2852
2853
2854
2855
2856
2857
2858
2859
2860
2861
2862
2863
2864
2865
2866
2867
2868
2869
2870
2871
2872
2873
2874
2875
2876
2877
2878
2879
2880
2881
2882
2883
2884
2885
2886
2887
2888
2889
2890
2891
2892
2893
2894
2895
2896
2897
2898
2899
2900
2901
2902
2903
2904
2905
2906
2907
2908
2909
2910
2911
2912
2913
2914
2915
2916
2917
2918
2919
2920
2921
  @ <input type="submit" name="apply" value="Apply Changes" />
  @ <input type="submit" name="cancel" value="Cancel" />
  @ </td></tr>
  @ </table>
  @ </div></form>
  style_footer();
}

/*
** Prepare an amended commit comment.  Let the user modify it using the
** editor specified in the global_config table or either
** the VISUAL or EDITOR environment variable.
**
** Store the final commit comment in pComment.  pComment is assumed
** to be uninitialized - any prior content is overwritten.
**
** Use zInit to initialize the check-in comment so that the user does
** not have to retype.
*/
static void prepare_amend_comment(
  Blob *pComment,
  const char *zInit,
  const char *zUuid
){
  Blob prompt;
#if defined(_WIN32) || defined(__CYGWIN__)
  int bomSize;
  const unsigned char *bom = get_utf8_bom(&bomSize);
  blob_init(&prompt, (const char *) bom, bomSize);
  if( zInit && zInit[0]){
    blob_append(&prompt, zInit, -1);
  }
#else
  blob_init(&prompt, zInit, -1);
#endif
  blob_append(&prompt, "\n# Enter a new comment for check-in ", -1);
  if( zUuid && zUuid[0] ){
    blob_append(&prompt, zUuid, -1);
  }
  blob_append(&prompt, ".\n# Lines beginning with a # are ignored.\n", -1);
  prompt_for_user_comment(pComment, &prompt);
  blob_reset(&prompt);
}
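
/*
** Illustrative sketch (hypothetical helper, never called): obtain an amended
** comment the way ci_amend_cmd below does when --edit-comment is given.  The
** zOldComment and zUuid arguments are assumed to describe an existing
** check-in.
*/
#if 0
static const char *example_prompt_for_amended_comment(
  const char *zOldComment,
  const char *zUuid
){
  Blob comment;
  prepare_amend_comment(&comment, zOldComment, zUuid);
  return blob_str(&comment);  /* text entered in the user's editor */
}
#endif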

#define AMEND_USAGE_STMT "UUID OPTION ?OPTION ...?"
/*
** COMMAND: amend
**
** Usage: %fossil amend UUID OPTION ?OPTION ...?
**
** Amend the tags on check-in UUID to change how it displays in the timeline.
**
** Options:
**
**    --author USER           Make USER the author for check-in
**    -m|--comment COMMENT    Make COMMENT the check-in comment
**    -M|--message-file FILE  Read the amended comment from FILE
**    --edit-comment          Launch editor to revise comment
**    --date DATE             Make DATE the check-in time
**    --bgcolor COLOR         Apply COLOR to this check-in
**    --branchcolor COLOR     Apply and propagate COLOR to the branch
**    --tag TAG               Add new TAG to this check-in
**    --cancel TAG            Cancel TAG from this check-in
**    --branch NAME           Make this check-in the start of branch NAME
**    --hide                  Hide branch starting from this check-in
**    --close                 Mark this "leaf" as closed
*/
void ci_amend_cmd(void){
  int rid;
  const char *zComment;         /* Current comment on the check-in */
  const char *zNewComment;      /* Revised check-in comment */
  const char *zComFile;         /* Filename from which to read comment */
  const char *zUser;            /* Current user for the check-in */
  const char *zNewUser;         /* Revised user */
  const char *zDate;            /* Current date of the check-in */
  const char *zNewDate;         /* Revised check-in date */
  const char *zColor;
  const char *zNewColor;
  const char *zNewBrColor;
  const char *zNewBranch;
  const char **pzNewTags = 0;
  const char **pzCancelTags = 0;
  int fClose;                   /* True if leaf should be closed */
  int fHide;                    /* True if branch should be hidden */
  int fPropagateColor;          /* True if color propagates before amend */
  int fNewPropagateColor = 0;   /* True if color propagates after amend */
  int fHasHidden = 0;           /* True if hidden tag already set */
  int fHasClosed = 0;           /* True if closed tag already set */
  int fEditComment;             /* True if editor to be used for comment */
  const char *zChngTime;        /* The change time on the control artifact */
  const char *zUuid;
  Blob ctrl;
  Blob comment;
  char *zNow;
  int nTags, nCancels;
  int i;
  Stmt q;

  if( g.argc==3 ) usage(AMEND_USAGE_STMT);
  fEditComment = find_option("edit-comment",0,0)!=0;
  zNewComment = find_option("comment","m",1);
  zComFile = find_option("message-file","M",1);
  zNewBranch = find_option("branch",0,1);
  zNewColor = find_option("bgcolor",0,1);
  zNewBrColor = find_option("branchcolor",0,1);
  if( zNewBrColor ){
    zNewColor = zNewBrColor;
    fNewPropagateColor = 1;
  }
  zNewDate = find_option("date",0,1);
  zNewUser = find_option("author",0,1);
  pzNewTags = find_repeatable_option("tag",0,&nTags);
  pzCancelTags = find_repeatable_option("cancel",0,&nCancels);
  fClose = find_option("close",0,0)!=0;
  fHide = find_option("hide",0,0)!=0;
  zChngTime = find_option("chngtime",0,1);
  db_find_and_open_repository(0,0);
  user_select();
  verify_all_options();
  if( g.argc<3 || g.argc>=4 ) usage(AMEND_USAGE_STMT);
  rid = name_to_typed_rid(g.argv[2], "ci");
  if( rid==0 || !is_a_version(rid) ) fossil_fatal("no such check-in");
  zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid);
  if( zUuid==0 ) fossil_fatal("Unable to find UUID");
  zComment = db_text(0, "SELECT coalesce(ecomment,comment)"
                        "  FROM event WHERE objid=%d", rid);
  zUser = db_text(0, "SELECT coalesce(euser,user)"
                     "  FROM event WHERE objid=%d", rid);
  zDate = db_text(0, "SELECT datetime(mtime)"
                     "  FROM event WHERE objid=%d", rid);
  zColor = db_text("", "SELECT bgcolor"
                        "  FROM event WHERE objid=%d", rid);
  fPropagateColor = db_int(0, "SELECT tagtype FROM tagxref"
                              " WHERE rid=%d AND tagid=%d",
                              rid, TAG_BGCOLOR)==2;
  fNewPropagateColor = zNewColor && zNewColor[0]
                        ? fNewPropagateColor : fPropagateColor;
  db_prepare(&q,
     "SELECT tag.tagid FROM tagxref, tag"
     " WHERE tagxref.rid=%d AND tagtype>0 AND tagxref.tagid=tag.tagid",
     rid
  );
  while( db_step(&q)==SQLITE_ROW ){
    int tagid = db_column_int(&q, 0);

    if( tagid == TAG_CLOSED ){
      fHasClosed = 1;
    }else if( tagid==TAG_HIDDEN ){
      fHasHidden = 1;
    }else{
      continue;
    }
  }
  db_finalize(&q);
  blob_zero(&ctrl);
  zNow = date_in_standard_format(zChngTime && zChngTime[0] ? zChngTime : "now");
  blob_appendf(&ctrl, "D %s\n", zNow);
  init_newtags();
  if( zNewColor && zNewColor[0]
      && (fPropagateColor!=fNewPropagateColor
            || fossil_strcmp(zColor,zNewColor)!=0)
  ){
    add_color(
      mprintf("%s%s", (zNewColor[0]!='#' &&
        validate16(zNewColor,strlen(zNewColor)) &&
        (strlen(zNewColor)==6 || strlen(zNewColor)==3)) ? "#" : "",
        zNewColor
      ),
      fNewPropagateColor
    );
  }
  if( (zNewColor!=0 && zNewColor[0]==0) && (zColor && zColor[0] ) ){
    cancel_color();
  }
  if( fEditComment ){
    prepare_amend_comment(&comment, zComment, zUuid);
    zNewComment = blob_str(&comment);
  }else if( zComFile ){
    blob_zero(&comment);
    blob_read_from_file(&comment, zComFile);
    blob_to_utf8_no_bom(&comment, 1);
    zNewComment = blob_str(&comment);
  }
  if( zNewComment && zNewComment[0]
      && comment_compare(zComment,zNewComment)==0 ) add_comment(zNewComment);
  if( zNewDate && zNewDate[0] && fossil_strcmp(zDate,zNewDate)!=0 ){
    if( is_datetime(zNewDate) ){
      add_date(zNewDate);
    }else{
      fossil_fatal("Unsupported date format, use YYYY-MM-DD HH:MM:SS");
    }
  }
  if( zNewUser && zNewUser[0] && fossil_strcmp(zUser,zNewUser)!=0 ){
    add_user(zNewUser);
  }
  if( pzNewTags!=0 ){
    for(i=0; i<nTags; i++){
      if( pzNewTags[i] && pzNewTags[i][0] ) add_tag(pzNewTags[i]);
    }
    fossil_free(pzNewTags);
  }
  if( pzCancelTags!=0 ){
    for(i=0; i<nCancels; i++){
      if( pzCancelTags[i] && pzCancelTags[i][0] )
        cancel_tag(rid,pzCancelTags[i]);
    }
    fossil_free(pzCancelTags);
  }
  if( fHide && !fHasHidden ) hide_branch();
  if( fClose && !fHasClosed ) close_leaf(rid);
  if( zNewBranch && zNewBranch[0] ) change_branch(rid,zNewBranch);
  apply_newtags(&ctrl, rid, zUuid);
  show_common_info(rid, "uuid:", 1, 0);
}
Changes to src/json.c.
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
*/
void json_main_bootstrap(){
  cson_value * v;
  assert( (NULL == g.json.gc.v) &&
          "json_main_bootstrap() was called twice!" );

  g.json.timerId = fossil_timer_start();
  
  /* g.json.gc is our "garbage collector" - where we put JSON values
     which need a long lifetime but don't have a logical parent to put
     them in.
  */
  v = cson_value_new_array();
  g.json.gc.v = v;
  g.json.gc.a = cson_value_get_array(v);







|







695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
*/
void json_main_bootstrap(){
  cson_value * v;
  assert( (NULL == g.json.gc.v) &&
          "json_main_bootstrap() was called twice!" );

  g.json.timerId = fossil_timer_start();

  /* g.json.gc is our "garbage collector" - where we put JSON values
     which need a long lifetime but don't have a logical parent to put
     them in.
  */
  v = cson_value_new_array();
  g.json.gc.v = v;
  g.json.gc.a = cson_value_get_array(v);
Changes to src/json_timeline.c.
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
           "       (fid==0) AS isdel,"
           "       (SELECT name FROM filename WHERE fnid=mlink.fnid) AS name,"
           "       blob.uuid as uuid,"
           "       (SELECT uuid FROM blob WHERE rid=pid) as parent,"
           "       blob.size as size"
           "  FROM mlink, blob"
           " WHERE mid=%d AND pid!=fid"
           " AND blob.rid=fid "
           " ORDER BY name /*sort*/",
             rid
             );
  while( (SQLITE_ROW == db_step(&q)) ){
    cson_value * rowV = cson_value_new_object();
    cson_object * row = cson_value_get_object(rowV);
    int const isNew = db_column_int(&q,0);







|







324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
           "       (fid==0) AS isdel,"
           "       (SELECT name FROM filename WHERE fnid=mlink.fnid) AS name,"
           "       blob.uuid as uuid,"
           "       (SELECT uuid FROM blob WHERE rid=pid) as parent,"
           "       blob.size as size"
           "  FROM mlink, blob"
           " WHERE mid=%d AND pid!=fid"
           " AND blob.rid=fid AND NOT mlink.isaux"
           " ORDER BY name /*sort*/",
             rid
             );
  while( (SQLITE_ROW == db_step(&q)) ){
    cson_value * rowV = cson_value_new_object();
    cson_object * row = cson_value_get_object(rowV);
    int const isNew = db_column_int(&q,0);
Changes to src/linenoise.c.
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
#include <sys/types.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include "linenoise.h"

#define LINENOISE_DEFAULT_HISTORY_MAX_LEN 100
#define LINENOISE_MAX_LINE 4096
static char *unsupported_term[] = {"dumb","cons25","emacs",NULL};
static linenoiseCompletionCallback *completionCallback = NULL;

static struct termios orig_termios; /* In order to restore at exit.*/
static int rawmode = 0; /* For atexit() function to check if restore is needed*/
static int mlmode = 0;  /* Multi line mode. Default is single line. */
static int atexit_registered = 0; /* Register atexit just 1 time. */
static int history_max_len = LINENOISE_DEFAULT_HISTORY_MAX_LEN;







|







116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
#include <sys/types.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include "linenoise.h"

#define LINENOISE_DEFAULT_HISTORY_MAX_LEN 100
#define LINENOISE_MAX_LINE 4096
static const char *unsupported_term[] = {"dumb","cons25","emacs",NULL};
static linenoiseCompletionCallback *completionCallback = NULL;

static struct termios orig_termios; /* In order to restore at exit.*/
static int rawmode = 0; /* For atexit() function to check if restore is needed*/
static int mlmode = 0;  /* Multi line mode. Default is single line. */
static int atexit_registered = 0; /* Register atexit just 1 time. */
static int history_max_len = LINENOISE_DEFAULT_HISTORY_MAX_LEN;
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
static void linenoiseAtExit(void);
int linenoiseHistoryAdd(const char *line);
static void refreshLine(struct linenoiseState *l);

/* Debugging macro. */
#if 0
FILE *lndebug_fp = NULL;
#define lndebug(...) \
    do { \
        if (lndebug_fp == NULL) { \
            lndebug_fp = fopen("/tmp/lndebug.txt","a"); \
            fprintf(lndebug_fp, \
            "[%d %d %d] p: %d, rows: %d, rpos: %d, max: %d, oldmax: %d\n", \
            (int)l->len,(int)l->pos,(int)l->oldpos,plen,rows,rpos, \
            (int)l->maxrows,old_rows); \
        } \
        fprintf(lndebug_fp, ", " __VA_ARGS__); \
        fflush(lndebug_fp); \
    } while (0)
#else
#define lndebug(fmt, ...)
#endif

/* ======================= Low level terminal handling ====================== */

/* Set whether or not to use multi-line mode. */
void linenoiseSetMultiLine(int ml) {
    mlmode = ml;







|








|



|







174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
static void linenoiseAtExit(void);
int linenoiseHistoryAdd(const char *line);
static void refreshLine(struct linenoiseState *l);

/* Debugging macro. */
#if 0
FILE *lndebug_fp = NULL;
#define lndebug(fmt, arg1) \
    do { \
        if (lndebug_fp == NULL) { \
            lndebug_fp = fopen("/tmp/lndebug.txt","a"); \
            fprintf(lndebug_fp, \
            "[%d %d %d] p: %d, rows: %d, rpos: %d, max: %d, oldmax: %d\n", \
            (int)l->len,(int)l->pos,(int)l->oldpos,plen,rows,rpos, \
            (int)l->maxrows,old_rows); \
        } \
        fprintf(lndebug_fp, ", " fmt, arg1); \
        fflush(lndebug_fp); \
    } while (0)
#else
#define lndebug(fmt, arg1)
#endif

/* ======================= Low level terminal handling ====================== */

/* Set whether or not to use multi-line mode. */
void linenoiseSetMultiLine(int ml) {
    mlmode = ml;
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
        lndebug("go down %d", old_rows-rpos);
        snprintf(seq,64,"\x1b[%dB", old_rows-rpos);
        abAppend(&ab,seq,strlen(seq));
    }

    /* Now for every row clear it, go up. */
    for (j = 0; j < old_rows-1; j++) {
        lndebug("clear+up");
        snprintf(seq,64,"\r\x1b[0K\x1b[1A");
        abAppend(&ab,seq,strlen(seq));
    }

    /* Clean the top line. */
    lndebug("clear");
    snprintf(seq,64,"\r\x1b[0K");
    abAppend(&ab,seq,strlen(seq));

    /* Write the prompt and the current buffer content */
    abAppend(&ab,l->prompt,strlen(l->prompt));
    abAppend(&ab,l->buf,l->len);

    /* If we are at the very end of the screen with our prompt, we need to
     * emit a newline and move the prompt to the first column. */
    if (l->pos &&
        l->pos == l->len &&
        (l->pos+plen) % l->cols == 0)
    {
        lndebug("<newline>");
        abAppend(&ab,"\n",1);
        snprintf(seq,64,"\r");
        abAppend(&ab,seq,strlen(seq));
        rows++;
        if (rows > (int)l->maxrows) l->maxrows = rows;
    }








|





|













|







524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
        lndebug("go down %d", old_rows-rpos);
        snprintf(seq,64,"\x1b[%dB", old_rows-rpos);
        abAppend(&ab,seq,strlen(seq));
    }

    /* Now for every row clear it, go up. */
    for (j = 0; j < old_rows-1; j++) {
        lndebug("clear+up", 0);
        snprintf(seq,64,"\r\x1b[0K\x1b[1A");
        abAppend(&ab,seq,strlen(seq));
    }

    /* Clean the top line. */
    lndebug("clear", 0);
    snprintf(seq,64,"\r\x1b[0K");
    abAppend(&ab,seq,strlen(seq));

    /* Write the prompt and the current buffer content */
    abAppend(&ab,l->prompt,strlen(l->prompt));
    abAppend(&ab,l->buf,l->len);

    /* If we are at the very end of the screen with our prompt, we need to
     * emit a newline and move the prompt to the first column. */
    if (l->pos &&
        l->pos == l->len &&
        (l->pos+plen) % l->cols == 0)
    {
        lndebug("<newline>", 0);
        abAppend(&ab,"\n",1);
        snprintf(seq,64,"\r");
        abAppend(&ab,seq,strlen(seq));
        rows++;
        if (rows > (int)l->maxrows) l->maxrows = rows;
    }

572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
    lndebug("set col %d", 1+col);
    if (col)
        snprintf(seq,64,"\r\x1b[%dC", col);
    else
        snprintf(seq,64,"\r");
    abAppend(&ab,seq,strlen(seq));

    lndebug("\n");
    l->oldpos = l->pos;

    if (write(fd,ab.b,ab.len) == -1) {} /* Can't recover from write error. */
    abFree(&ab);
}

/* Calls the two low level functions refreshSingleLine() or







|







572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
    lndebug("set col %d", 1+col);
    if (col)
        snprintf(seq,64,"\r\x1b[%dC", col);
    else
        snprintf(seq,64,"\r");
    abAppend(&ab,seq,strlen(seq));

    lndebug("\n", 0);
    l->oldpos = l->pos;

    if (write(fd,ab.b,ab.len) == -1) {} /* Can't recover from write error. */
    abFree(&ab);
}

/* Calls the two low level functions refreshSingleLine() or
918
919
920
921
922
923
924
925
926
927
928
929
930
931
932
        nread = read(STDIN_FILENO,&c,1);
        if (nread <= 0) continue;
        memmove(quit,quit+1,sizeof(quit)-1); /* shift string to left. */
        quit[sizeof(quit)-1] = c; /* Insert current char on the right. */
        if (memcmp(quit,"quit",sizeof(quit)) == 0) break;

        printf("'%c' %02x (%d) (type quit to exit)\n",
            isprint(c) ? c : '?', (int)c, (int)c);
        printf("\r"); /* Go left edge manually, we are in raw mode. */
        fflush(stdout);
    }
    disableRawMode(STDIN_FILENO);
}

/* This function calls the line editing function linenoiseEdit() using







|







918
919
920
921
922
923
924
925
926
927
928
929
930
931
932
        nread = read(STDIN_FILENO,&c,1);
        if (nread <= 0) continue;
        memmove(quit,quit+1,sizeof(quit)-1); /* shift string to left. */
        quit[sizeof(quit)-1] = c; /* Insert current char on the right. */
        if (memcmp(quit,"quit",sizeof(quit)) == 0) break;

        printf("'%c' %02x (%d) (type quit to exit)\n",
            isprint((int)c) ? c : '?', (int)c, (int)c);
        printf("\r"); /* Go left edge manually, we are in raw mode. */
        fflush(stdout);
    }
    disableRawMode(STDIN_FILENO);
}

/* This function calls the line editing function linenoiseEdit() using
Changes to src/login.c.
214
215
216
217
218
219
220
221


222
223
224
225
226
227
228
  char *zSha1Pw = sha1_shared_secret(zPasswd, zUsername, 0);
  int const uid =
      db_int(0,
             "SELECT uid FROM user"
             " WHERE login=%Q"
             "   AND length(cap)>0 AND length(pw)>0"
             "   AND login NOT IN ('anonymous','nobody','developer','reader')"
             "   AND (pw=%Q OR (length(pw)<>40 AND pw=%Q))",


             zUsername, zSha1Pw, zPasswd
             );
  free(zSha1Pw);
  return uid;
}

/*







|
>
>







214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
  char *zSha1Pw = sha1_shared_secret(zPasswd, zUsername, 0);
  int const uid =
      db_int(0,
             "SELECT uid FROM user"
             " WHERE login=%Q"
             "   AND length(cap)>0 AND length(pw)>0"
             "   AND login NOT IN ('anonymous','nobody','developer','reader')"
             "   AND (pw=%Q OR (length(pw)<>40 AND pw=%Q))"
             "   AND (info NOT LIKE '%%expires 20%%'"
             "      OR substr(info,instr(lower(info),'expires')+8,10)>datetime('now'))",
             zUsername, zSha1Pw, zPasswd
             );
  free(zSha1Pw);
  return uid;
}

/*
Changes to src/main.c.
874
875
876
877
878
879
880


































881
882
883
884
885
886
887
      zReturn = g.argv[i+hasArg];
      remove_from_argv(i, 1+hasArg);
      break;
    }
  }
  return zReturn;
}



































/*
** Look for a repository command-line option.  If present, [re-]cache it in
** the global state and return the new pointer, freeing any previous value.
** If absent and there is no cached value, return NULL.
*/
const char *find_repository_option(){







>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>







874
875
876
877
878
879
880
881
882
883
884
885
886
887
888
889
890
891
892
893
894
895
896
897
898
899
900
901
902
903
904
905
906
907
908
909
910
911
912
913
914
915
916
917
918
919
920
921
      zReturn = g.argv[i+hasArg];
      remove_from_argv(i, 1+hasArg);
      break;
    }
  }
  return zReturn;
}

/*
** Look for multiple occurrences of a command-line option with the
** corresponding argument.
**
** Return a malloc-allocated array of pointers to the arguments.
**
** pnUsedArgs is used to store the number of matched arguments.
**
** The caller is responsible for freeing the allocated memory.
*/
const char **find_repeatable_option(
  const char *zLong,
  const char *zShort,
  int *pnUsedArgs
){
  const char *zOption;
  const char **pzArgs = 0;
  int nAllocArgs = 0;
  int nUsedArgs = 0;

  while( (zOption = find_option(zLong, zShort, 1))!=0 ){
    if( pzArgs==0 && nAllocArgs==0 ){
      nAllocArgs = 1;
      pzArgs = fossil_malloc( nAllocArgs*sizeof(pzArgs[0]) );
    }else if( nAllocArgs<=nUsedArgs ){
      nAllocArgs = nAllocArgs*2;
      pzArgs = fossil_realloc( pzArgs, nAllocArgs*sizeof(pzArgs[0]) );
    }
    pzArgs[nUsedArgs++] = zOption;
  }
  *pnUsedArgs = nUsedArgs;
  return pzArgs;
}
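
/*
** Illustrative sketch (hypothetical helper, never called): collect every
** occurrence of the --tag option, as the "amend" command does, then release
** the array returned by find_repeatable_option().
*/
#if 0
static void example_collect_repeated_tags(void){
  int nTags = 0;
  int i;
  const char **pzTags = find_repeatable_option("tag", 0, &nTags);
  for(i=0; i<nTags; i++){
    fossil_print("tag #%d: %s\n", i+1, pzTags[i]);
  }
  if( pzTags ) fossil_free(pzTags);
}
#endif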

/*
** Look for a repository command-line option.  If present, [re-]cache it in
** the global state and return the new pointer, freeing any previous value.
** If absent and there is no cached value, return NULL.
*/
const char *find_repository_option(){
1024
1025
1026
1027
1028
1029
1030



1031
1032
1033
1034
1035
1036
1037
    fossil_print("zlib %s, loaded %s\n", ZLIB_VERSION, zlibVersion());
#endif
#if defined(FOSSIL_ENABLE_SSL)
    fossil_print("SSL (%s)\n", SSLeay_version(SSLEAY_VERSION));
#endif
#if defined(FOSSIL_ENABLE_LEGACY_MV_RM)
    fossil_print("LEGACY_MV_RM\n");



#endif
#if defined(FOSSIL_ENABLE_TH1_DOCS)
    fossil_print("TH1_DOCS\n");
#endif
#if defined(FOSSIL_ENABLE_TH1_HOOKS)
    fossil_print("TH1_HOOKS\n");
#endif







>
>
>







1058
1059
1060
1061
1062
1063
1064
1065
1066
1067
1068
1069
1070
1071
1072
1073
1074
    fossil_print("zlib %s, loaded %s\n", ZLIB_VERSION, zlibVersion());
#endif
#if defined(FOSSIL_ENABLE_SSL)
    fossil_print("SSL (%s)\n", SSLeay_version(SSLEAY_VERSION));
#endif
#if defined(FOSSIL_ENABLE_LEGACY_MV_RM)
    fossil_print("LEGACY_MV_RM\n");
#endif
#if defined(FOSSIL_ENABLE_EXEC_REL_PATHS)
    fossil_print("EXEC_REL_PATHS\n");
#endif
#if defined(FOSSIL_ENABLE_TH1_DOCS)
    fossil_print("TH1_DOCS\n");
#endif
#if defined(FOSSIL_ENABLE_TH1_HOOKS)
    fossil_print("TH1_HOOKS\n");
#endif
Changes to src/main.mk.
476
477
478
479
480
481
482
483


484
485
486
487
488
489
490
                 -DSQLITE_ENABLE_LOCKING_STYLE=0 \
                 -DSQLITE_THREADSAFE=0 \
                 -DSQLITE_DEFAULT_FILE_FORMAT=4 \
                 -DSQLITE_OMIT_DEPRECATED \
                 -DSQLITE_ENABLE_EXPLAIN_COMMENTS \
                 -DSQLITE_ENABLE_FTS4 \
                 -DSQLITE_ENABLE_FTS3_PARENTHESIS \
                 -DSQLITE_ENABLE_DBSTAT_VTAB



# Setup the options used to compile the included SQLite shell.
SHELL_OPTIONS = -Dmain=sqlite3_shell \
                -DSQLITE_OMIT_LOAD_EXTENSION=1 \
                -DUSE_SYSTEM_SQLITE=$(USE_SYSTEM_SQLITE) \
                -DSQLITE_SHELL_DBNAME_PROC=fossil_open








|
>
>







476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
                 -DSQLITE_ENABLE_LOCKING_STYLE=0 \
                 -DSQLITE_THREADSAFE=0 \
                 -DSQLITE_DEFAULT_FILE_FORMAT=4 \
                 -DSQLITE_OMIT_DEPRECATED \
                 -DSQLITE_ENABLE_EXPLAIN_COMMENTS \
                 -DSQLITE_ENABLE_FTS4 \
                 -DSQLITE_ENABLE_FTS3_PARENTHESIS \
                 -DSQLITE_ENABLE_DBSTAT_VTAB \
                 -DSQLITE_ENABLE_JSON1 \
                 -DSQLITE_ENABLE_FTS5

# Setup the options used to compile the included SQLite shell.
SHELL_OPTIONS = -Dmain=sqlite3_shell \
                -DSQLITE_OMIT_LOAD_EXTENSION=1 \
                -DUSE_SYSTEM_SQLITE=$(USE_SYSTEM_SQLITE) \
                -DSQLITE_SHELL_DBNAME_PROC=fossil_open

Changes to src/makemake.tcl.
159
160
161
162
163
164
165


166
167
168
169
170
171
172
  -DSQLITE_THREADSAFE=0
  -DSQLITE_DEFAULT_FILE_FORMAT=4
  -DSQLITE_OMIT_DEPRECATED
  -DSQLITE_ENABLE_EXPLAIN_COMMENTS
  -DSQLITE_ENABLE_FTS4
  -DSQLITE_ENABLE_FTS3_PARENTHESIS
  -DSQLITE_ENABLE_DBSTAT_VTAB


}
#lappend SQLITE_OPTIONS -DSQLITE_ENABLE_FTS3=1
#lappend SQLITE_OPTIONS -DSQLITE_ENABLE_STAT4
#lappend SQLITE_OPTIONS -DSQLITE_WIN32_NO_ANSI
#lappend SQLITE_OPTIONS -DSQLITE_WINNT_MAX_PATH_CHARS=4096

# Options used to compile the included SQLite shell.







>
>







159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
  -DSQLITE_THREADSAFE=0
  -DSQLITE_DEFAULT_FILE_FORMAT=4
  -DSQLITE_OMIT_DEPRECATED
  -DSQLITE_ENABLE_EXPLAIN_COMMENTS
  -DSQLITE_ENABLE_FTS4
  -DSQLITE_ENABLE_FTS3_PARENTHESIS
  -DSQLITE_ENABLE_DBSTAT_VTAB
  -DSQLITE_ENABLE_JSON1
  -DSQLITE_ENABLE_FTS5
}
#lappend SQLITE_OPTIONS -DSQLITE_ENABLE_FTS3=1
#lappend SQLITE_OPTIONS -DSQLITE_ENABLE_STAT4
#lappend SQLITE_OPTIONS -DSQLITE_WIN32_NO_ANSI
#lappend SQLITE_OPTIONS -DSQLITE_WINNT_MAX_PATH_CHARS=4096

# Options used to compile the included SQLite shell.
454
455
456
457
458
459
460




461
462
463
464
465
466
467
# This file is automatically generated.  Instead of editing this
# file, edit "makemake.tcl" then run "tclsh makemake.tcl"
# to regenerate this file.
#
# This is a makefile for use on Cygwin/Darwin/FreeBSD/Linux/Windows using
# MinGW or MinGW-w64.
#





#### Select one of MinGW, MinGW-w64 (32-bit) or MinGW-w64 (64-bit) compilers.
#    By default, this is an empty string (i.e. use the native compiler).
#
PREFIX =
# PREFIX = mingw32-
# PREFIX = i686-pc-mingw32-







>
>
>
>







456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
# This file is automatically generated.  Instead of editing this
# file, edit "makemake.tcl" then run "tclsh makemake.tcl"
# to regenerate this file.
#
# This is a makefile for use on Cygwin/Darwin/FreeBSD/Linux/Windows using
# MinGW or MinGW-w64.
#
# Some of the special options that can be passed to make:
#   USE_WINDOWS=1    if building under a Windows command prompt
#   X64=1            if using an unprefixed 64-bit MinGW compiler
#

#### Select one of MinGW, MinGW-w64 (32-bit) or MinGW-w64 (64-bit) compilers.
#    By default, this is an empty string (i.e. use the native compiler).
#
PREFIX =
# PREFIX = mingw32-
# PREFIX = i686-pc-mingw32-
497
498
499
500
501
502
503




504
505
506
507
508
509
510
#
# FOSSIL_ENABLE_SSL = 1

#### Automatically build OpenSSL when building Fossil (causes rebuild
#    issues when building incrementally).
#
# FOSSIL_BUILD_SSL = 1





#### Enable legacy treatment of mv/rm (skip checkout files)
#
# FOSSIL_ENABLE_LEGACY_MV_RM = 1

#### Enable TH1 scripts in embedded documentation files
#







>
>
>
>







503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
#
# FOSSIL_ENABLE_SSL = 1

#### Automatically build OpenSSL when building Fossil (causes rebuild
#    issues when building incrementally).
#
# FOSSIL_BUILD_SSL = 1

#### Enable relative paths in external diff/gdiff
#
# FOSSIL_ENABLE_EXEC_REL_PATHS = 1

#### Enable legacy treatment of mv/rm (skip checkout files)
#
# FOSSIL_ENABLE_LEGACY_MV_RM = 1

#### Enable TH1 scripts in embedded documentation files
#
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
660
661








662
663
664
665
666
667
668

#### C Compile and options for use in building executables that
#    will run on the target platform.  This is usually the same
#    as BCC, unless you are cross-compiling.  This C compiler builds
#    the finished binary for fossil.  The BCC compiler above is used
#    for building intermediate code-generator tools.
#
TCC = $(PREFIX)gcc -Os -Wall

#### When not using the miniz compression library, zlib is required.
#
ifndef FOSSIL_ENABLE_MINIZ
TCC += -L$(ZLIBDIR) -I$(ZINCDIR)
endif

#### Add the necessary command line options to build with debugging
#    symbols, if enabled.
#
ifdef FOSSIL_ENABLE_SYMBOLS
TCC += -g








endif

#### Compile resources for use in building executables that will run
#    on the target platform.
#
RCC = $(PREFIX)windres -I$(SRCDIR)








|
<
<
<
<
<
<






>
>
>
>
>
>
>
>







652
653
654
655
656
657
658
659






660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680

#### C Compile and options for use in building executables that
#    will run on the target platform.  This is usually the same
#    as BCC, unless you are cross-compiling.  This C compiler builds
#    the finished binary for fossil.  The BCC compiler above is used
#    for building intermediate code-generator tools.
#
TCC = $(PREFIX)gcc -Wall







#### Add the necessary command line options to build with debugging
#    symbols, if enabled.
#
ifdef FOSSIL_ENABLE_SYMBOLS
TCC += -g
else
TCC += -Os
endif

#### When not using the miniz compression library, zlib is required.
#
ifndef FOSSIL_ENABLE_MINIZ
TCC += -L$(ZLIBDIR) -I$(ZINCDIR)
endif

#### Compile resources for use in building executables that will run
#    on the target platform.
#
RCC = $(PREFIX)windres -I$(SRCDIR)

700
701
702
703
704
705
706






707
708
709
710
711
712
713
endif

# With HTTPS support
ifdef FOSSIL_ENABLE_SSL
TCC += -DFOSSIL_ENABLE_SSL=1
RCC += -DFOSSIL_ENABLE_SSL=1
endif







# With legacy treatment of mv/rm
ifdef FOSSIL_ENABLE_LEGACY_MV_RM
TCC += -DFOSSIL_ENABLE_LEGACY_MV_RM=1
RCC += -DFOSSIL_ENABLE_LEGACY_MV_RM=1
endif








>
>
>
>
>
>







712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
endif

# With HTTPS support
ifdef FOSSIL_ENABLE_SSL
TCC += -DFOSSIL_ENABLE_SSL=1
RCC += -DFOSSIL_ENABLE_SSL=1
endif

# With relative paths in external diff/gdiff
ifdef FOSSIL_ENABLE_EXEC_REL_PATHS
TCC += -DFOSSIL_ENABLE_EXEC_REL_PATHS=1
RCC += -DFOSSIL_ENABLE_EXEC_REL_PATHS=1
endif

# With legacy treatment of mv/rm
ifdef FOSSIL_ENABLE_LEGACY_MV_RM
TCC += -DFOSSIL_ENABLE_LEGACY_MV_RM=1
RCC += -DFOSSIL_ENABLE_LEGACY_MV_RM=1
endif

1329
1330
1331
1332
1333
1334
1335





1336
1337
1338
1339
1340
1341
1342
FOSSIL_BUILD_ZLIB = 1
!endif

# Link everything except SQLite dynamically?
!ifndef FOSSIL_DYNAMIC_BUILD
FOSSIL_DYNAMIC_BUILD = 0
!endif






# Enable the JSON API?
!ifndef FOSSIL_ENABLE_JSON
FOSSIL_ENABLE_JSON = 0
!endif

# Enable legacy treatment of the mv/rm commands?







>
>
>
>
>







1347
1348
1349
1350
1351
1352
1353
1354
1355
1356
1357
1358
1359
1360
1361
1362
1363
1364
1365
FOSSIL_BUILD_ZLIB = 1
!endif

# Link everything except SQLite dynamically?
!ifndef FOSSIL_DYNAMIC_BUILD
FOSSIL_DYNAMIC_BUILD = 0
!endif

# Enable relative paths in external diff/gdiff?
!ifndef FOSSIL_ENABLE_EXEC_REL_PATHS
FOSSIL_ENABLE_EXEC_REL_PATHS = 0
!endif

# Enable the JSON API?
!ifndef FOSSIL_ENABLE_JSON
FOSSIL_ENABLE_JSON = 0
!endif

# Enable legacy treatment of the mv/rm commands?
1547
1548
1549
1550
1551
1552
1553





1554
1555
1556
1557
1558
1559
1560

!if $(FOSSIL_ENABLE_SSL)!=0
TCC       = $(TCC) /DFOSSIL_ENABLE_SSL=1
RCC       = $(RCC) /DFOSSIL_ENABLE_SSL=1
LIBS      = $(LIBS) $(SSLLIB)
LIBDIR    = $(LIBDIR) /LIBPATH:$(SSLLIBDIR)
!endif






!if $(FOSSIL_ENABLE_LEGACY_MV_RM)!=0
TCC       = $(TCC) /DFOSSIL_ENABLE_LEGACY_MV_RM=1
RCC       = $(RCC) /DFOSSIL_ENABLE_LEGACY_MV_RM=1
!endif

!if $(FOSSIL_ENABLE_TH1_DOCS)!=0







>
>
>
>
>







1570
1571
1572
1573
1574
1575
1576
1577
1578
1579
1580
1581
1582
1583
1584
1585
1586
1587
1588

!if $(FOSSIL_ENABLE_SSL)!=0
TCC       = $(TCC) /DFOSSIL_ENABLE_SSL=1
RCC       = $(RCC) /DFOSSIL_ENABLE_SSL=1
LIBS      = $(LIBS) $(SSLLIB)
LIBDIR    = $(LIBDIR) /LIBPATH:$(SSLLIBDIR)
!endif

!if $(FOSSIL_ENABLE_EXEC_REL_PATHS)!=0
TCC       = $(TCC) /DFOSSIL_ENABLE_EXEC_REL_PATHS=1
RCC       = $(RCC) /DFOSSIL_ENABLE_EXEC_REL_PATHS=1
!endif

!if $(FOSSIL_ENABLE_LEGACY_MV_RM)!=0
TCC       = $(TCC) /DFOSSIL_ENABLE_LEGACY_MV_RM=1
RCC       = $(RCC) /DFOSSIL_ENABLE_LEGACY_MV_RM=1
!endif

!if $(FOSSIL_ENABLE_TH1_DOCS)!=0
Changes to src/markdown.md.
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
# Markdown formatting rules

In addition to its native Wiki formatting syntax, Fossil supports Markdown syntax as specified by 
[John Gruber's original Markdown implementation](http://daringfireball.net/projects/markdown/). 
For lots of examples - not repeated here - please refer to its 
[syntax description](http://daringfireball.net/projects/markdown/syntax), of which the page you
are reading is an extract.

This page itself uses Markdown formatting.

## Summary

  - Block elements

      * A **paragraph** is a group of consecutive lines. Paragraphs are separated by blank lines.

      * A **Header** is a line of text underlined with equal signs or hyphens, or prefixed by a 
        number of hash marks.

      * **Block quotes** are blocks of text prefixed by '>'.

      * **Ordered list** items are prefixed by a number and a period. **Unordered list** items
        are prefixed by a hyphen, asterisk or plus sign. Prefix and item text are separated by
        whitespace. 

      * **Code blocks** are formed by lines of text (possibly including empty lines) prefixed by
        at least 4 spaces or a tab.

      * A **horizontal rule** is a line consisting of 3 or more asterisks, hyphens or underscores,
        with optional whitespace between them.

  - Span elements

      * 3 types of **links** exist:

        - **automatic links** are URLs or email addresses enclosed in angle brackets
          ('<' and '>'), and are displayed as such.

        - **inline links** consist of the displayed link text in square brackets ('[' and ']'), 
          followed by the link target in parentheses. 

        - **reference links** separate _link instance_ from _link definition_. A link instance 
          consists of the displayed link text in square brackets, followed by a link definition name 
          in square brackets. 
          The corresponding link definition can occur anywhere on the page, and consists
          of the link definition name in square brackets followed by a colon, whitespace and the 
          link target.

      * **Emphasis** can be given by wrapping text in one or two asterisks or underscores - use
        one for HTML `<em>`, and two for `<strong>` emphasis.

      * A **code span** is text wrapped in backticks ('`').

      * **Images** use a syntax much like inline or reference links, but with alt attribute text 
        ('img alt=...') instead of link text, and the first pair of square
        brackets in an image instance prefixed by an exclamation mark.

  - **Inline HTML** is mostly interpreted automatically.

  - **Escaping** Markdown punctuation characters is done by prefixing them by a backslash ('\\').



|
|
|











|






|














|
|

|
|
|

|







|







1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
# Markdown formatting rules

In addition to its native Wiki formatting syntax, Fossil supports Markdown syntax as specified by
[John Gruber's original Markdown implementation](http://daringfireball.net/projects/markdown/).
For lots of examples - not repeated here - please refer to its
[syntax description](http://daringfireball.net/projects/markdown/syntax), of which the page you
are reading is an extract.

This page itself uses Markdown formatting.

## Summary

  - Block elements

      * A **paragraph** is a group of consecutive lines. Paragraphs are separated by blank lines.

      * A **Header** is a line of text underlined with equal signs or hyphens, or prefixed by a
        number of hash marks.

      * **Block quotes** are blocks of text prefixed by '>'.

      * **Ordered list** items are prefixed by a number and a period. **Unordered list** items
        are prefixed by a hyphen, asterisk or plus sign. Prefix and item text are separated by
        whitespace.

      * **Code blocks** are formed by lines of text (possibly including empty lines) prefixed by
        at least 4 spaces or a tab.

      * A **horizontal rule** is a line consisting of 3 or more asterisks, hyphens or underscores,
        with optional whitespace between them.

  - Span elements

      * 3 types of **links** exist:

        - **automatic links** are URLs or email addresses enclosed in angle brackets
          ('<' and '>'), and are displayed as such.

        - **inline links** consist of the displayed link text in square brackets ('[' and ']'),
          followed by the link target in parentheses.

        - **reference links** separate _link instance_ from _link definition_. A link instance
          consists of the displayed link text in square brackets, followed by a link definition name
          in square brackets.
          The corresponding link definition can occur anywhere on the page, and consists
          of the link definition name in square brackets followed by a colon, whitespace and the
          link target.

      * **Emphasis** can be given by wrapping text in one or two asterisks or underscores - use
        one for HTML `<em>`, and two for `<strong>` emphasis.

      * A **code span** is text wrapped in backticks ('`').

      * **Images** use a syntax much like inline or reference links, but with alt attribute text
        ('img alt=...') instead of link text, and the first pair of square
        brackets in an image instance prefixed by an exclamation mark.

  - **Inline HTML** is mostly interpreted automatically.

  - **Escaping** Markdown punctuation characters is done by prefixing them by a backslash ('\\').

86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
level.

### Block quotes

Not every line in a paragraph needs to be prefixed by '>' in order to make it a block quote,
only the first line.

Block quoted paragraphs can be nested by using multiple '>' characters as prefix. 

Within a block quote, Markdown formatting (e.g. lists, emphasis) still works as normal.

### Lists

A list item prefix need not occur first on its line; up to 3 leading spaces are allowed
(4 spaces would make a code block out of the following text).

For unordered lists, asterisks, hyphens and plus signs can be used interchangeably.

For ordered lists, arbitrary numbers can be used as part of an item prefix; the items will be 
renumbered during rendering. However, future implementations may demand that the number used 
for the first item in a list indicates an offset to be used for subsequent items.

For list items spanning multiple lines, subsequent lines can be indented using an arbitrary amount
of whitespace.

List items will be wrapped in HTML `<p>` tags if they are separated by blank lines.

A list item may span multiple paragraphs. At least the first line of each such paragraph must 
be indented using at least 4 spaces or a tab character.

Block quotes within list items must have their '>' delimiters indented using 4 up to 7 spaces.

Code blocks within list items need to be indented _twice_, that is, using 8 spaces or 2 tab
characters.

Lines 129-163 (whitespace-only changes):

Regular Markdown syntax is not processed within code blocks.

### Links

#### Automatic links

When rendering automatic links to email addresses, HTML encoding obfuscation is used to
prevent some spambots from harvesting the addresses.

#### Inline links

Links to resources on the same server can use relative paths (i.e. can start with a '/').

An optional title for the link (e.g. to get mouseover text in the browser) may be given after
the link target, still within the parentheses, enclosed in single or double quotes and separated
from the link target by whitespace.

#### Reference links

> Each reference link consists of 
>
>   - one or more _link instances_ at appropriate locations in the page text
>   - a single _link definition_ at an arbitrary location on the page
> 
> During rendering, each link instance is resolved, and the corresponding definition is
> filled in. No separate link definition clauses occur in the rendered output.
> 
> There are 3 fields involved in link instances and definitions:
>
>   - link text (i.e. the text that is displayed at the resulting link)
>   - link definition name (i.e. a unique ID binding link instances to a link definition)
>   - link target (a target URL for the link)

Multiple link instances may reference the same link definition using its link definition name.
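
For instance, a minimal sketch of one link instance and its definition (the definition name
'fossil-scm' is invented for the example):

This page is served by [Fossil][fossil-scm].

[fossil-scm]: http://www.fossil-scm.org/

During rendering the instance and the definition are matched on the name 'fossil-scm', and only
the resulting hyperlink appears in the output.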
Lines 195-227 (whitespace-only changes):
side of emphasis start or end punctuation characters.

### Code spans

To include a literal backtick character in a code span, use multiple backticks as opening and
closing delimiters.

Whitespace may exist immediately after the opening delimiter and before the closing delimiter 
of a code span, to allow for code fragments starting or ending with a backtick.

Within a code span - like within a code block - angle brackets and ampersands are automatically encoded to make including
HTML fragments easier.

### Images

If necessary, HTML must be used to specify image dimensions. Markdown has no provision for this.

### Inline HTML

Start and end tags of an HTML block-level construct (`<div>`, `<table>`, etc.) must be separated
from the surrounding context by blank lines, and must both occur at the start of a line.

No extra unwanted `<p>` HTML tags are added around HTML block level tags.

Markdown formatting within HTML block level tags is not processed; however, formatting within 
span level tags (e.g. `<mark>`) is processed normally.

### Escaping Markdown punctuation

The following punctuation characters can be escaped using backslash:

  - \\   backslash
Changes to src/merge.c.
Before (lines 225-238):
  verify_all_options();
  db_must_be_within_tree();
  if( zBinGlob==0 ) zBinGlob = db_get("binary-glob",0);
  vid = db_lget_int("checkout", 0);
  if( vid==0 ){
    fossil_fatal("nothing is checked out");
  }







  /* Find mid, the artifactID of the version to be merged into the current
  ** check-out */
  if( g.argc==3 ){
    /* Mid is specified as an argument on the command-line */
    mid = name_to_typed_rid(g.argv[2], "ci");
    if( mid==0 || !is_a_version(mid) ){
After (lines 225-244):
  verify_all_options();
  db_must_be_within_tree();
  if( zBinGlob==0 ) zBinGlob = db_get("binary-glob",0);
  vid = db_lget_int("checkout", 0);
  if( vid==0 ){
    fossil_fatal("nothing is checked out");
  }
  if( !dryRunFlag ){
    if( autosync_loop(SYNC_PULL + SYNC_VERBOSE*verboseFlag,
                      db_get_int("autosync-tries", 1)) ){
      fossil_fatal("Cannot proceed with merge");
    }
  }

  /* Find mid, the artifactID of the version to be merged into the current
  ** check-out */
  if( g.argc==3 ){
    /* Mid is specified as an argument on the command-line */
    mid = name_to_typed_rid(g.argv[2], "ci");
    if( mid==0 || !is_a_version(mid) ){
Before (lines 590-604):
    }else{
      fossil_print("MERGE %s\n", zName);
    }
    if( islinkv || islinkm /* || file_wd_islink(zFullPath) */ ){
      fossil_print("***** Cannot merge symlink %s\n", zName);
      nConflict++;
    }else{
      undo_save(zName);
      zFullPath = mprintf("%s/%s", g.zLocalRoot, zName);
      content_get(ridp, &p);
      content_get(ridm, &m);
      if( isBinary ){
        rc = -1;
        blob_zero(&r);
      }else{
After (lines 596-610):
    }else{
      fossil_print("MERGE %s\n", zName);
    }
    if( islinkv || islinkm /* || file_wd_islink(zFullPath) */ ){
      fossil_print("***** Cannot merge symlink %s\n", zName);
      nConflict++;
    }else{
      if( !dryRunFlag ) undo_save(zName);
      zFullPath = mprintf("%s/%s", g.zLocalRoot, zName);
      content_get(ridp, &p);
      content_get(ridm, &m);
      if( isBinary ){
        rc = -1;
        blob_zero(&r);
      }else{
Before (lines 641-655):
    int chnged = db_column_int(&q, 2);
    /* Delete the file idv */
    fossil_print("DELETE %s\n", zName);
    if( chnged ){
      fossil_warning("WARNING: local edits lost for %s\n", zName);
      nConflict++;
    }
    undo_save(zName);
    db_multi_exec(
      "UPDATE vfile SET deleted=1 WHERE id=%d", idv
    );
    if( !dryRunFlag ){
      char *zFullPath = mprintf("%s%s", g.zLocalRoot, zName);
      file_delete(zFullPath);
      free(zFullPath);
After (lines 647-661):
    int chnged = db_column_int(&q, 2);
    /* Delete the file idv */
    fossil_print("DELETE %s\n", zName);
    if( chnged ){
      fossil_warning("WARNING: local edits lost for %s\n", zName);
      nConflict++;
    }
    if( !dryRunFlag ) undo_save(zName);
    db_multi_exec(
      "UPDATE vfile SET deleted=1 WHERE id=%d", idv
    );
    if( !dryRunFlag ){
      char *zFullPath = mprintf("%s%s", g.zLocalRoot, zName);
      file_delete(zFullPath);
      free(zFullPath);
Before (lines 667-682):
    " WHERE idv>0 AND idp>0 AND idm>0 AND fnp=fn AND fnm!=fnp"
  );
  while( db_step(&q)==SQLITE_ROW ){
    int idv = db_column_int(&q, 0);
    const char *zOldName = db_column_text(&q, 1);
    const char *zNewName = db_column_text(&q, 2);
    fossil_print("RENAME %s -> %s\n", zOldName, zNewName);
    undo_save(zOldName);
    undo_save(zNewName);
    db_multi_exec(
      "UPDATE vfile SET pathname=%Q, origname=coalesce(origname,pathname)"
      " WHERE id=%d AND vid=%d", zNewName, idv, vid
    );
    if( !dryRunFlag ){
      char *zFullOldPath = mprintf("%s%s", g.zLocalRoot, zOldName);
      char *zFullNewPath = mprintf("%s%s", g.zLocalRoot, zNewName);
After (lines 673-688):
    " WHERE idv>0 AND idp>0 AND idm>0 AND fnp=fn AND fnm!=fnp"
  );
  while( db_step(&q)==SQLITE_ROW ){
    int idv = db_column_int(&q, 0);
    const char *zOldName = db_column_text(&q, 1);
    const char *zNewName = db_column_text(&q, 2);
    fossil_print("RENAME %s -> %s\n", zOldName, zNewName);
    if( !dryRunFlag ) undo_save(zOldName);
    if( !dryRunFlag ) undo_save(zNewName);
    db_multi_exec(
      "UPDATE vfile SET pathname=%Q, origname=coalesce(origname,pathname)"
      " WHERE id=%d AND vid=%d", zNewName, idv, vid
    );
    if( !dryRunFlag ){
      char *zFullOldPath = mprintf("%s%s", g.zLocalRoot, zOldName);
      char *zFullNewPath = mprintf("%s%s", g.zLocalRoot, zNewName);
Before (lines 724-733):
  }else if( backoutFlag ){
    db_multi_exec("INSERT OR IGNORE INTO vmerge(id,merge) VALUES(-2,%d)",pid);
  }else if( integrateFlag ){
    db_multi_exec("INSERT OR IGNORE INTO vmerge(id,merge) VALUES(-4,%d)",mid);
  }else{
    db_multi_exec("INSERT OR IGNORE INTO vmerge(id,merge) VALUES(0,%d)", mid);
  }
  undo_finish();
  db_end_transaction(dryRunFlag);
}
After (lines 730-739):
  }else if( backoutFlag ){
    db_multi_exec("INSERT OR IGNORE INTO vmerge(id,merge) VALUES(-2,%d)",pid);
  }else if( integrateFlag ){
    db_multi_exec("INSERT OR IGNORE INTO vmerge(id,merge) VALUES(-4,%d)",mid);
  }else{
    db_multi_exec("INSERT OR IGNORE INTO vmerge(id,merge) VALUES(0,%d)", mid);
  }
  if( !dryRunFlag ) undo_finish();
  db_end_transaction(dryRunFlag);
}
Changes to src/rebuild.c.
Before (lines 531-544):
**   --deanalyze       Remove ANALYZE tables from the database
**   --force           Force the rebuild to complete even if errors are seen
**   --ifneeded        Only do the rebuild if it would change the schema version
**   --index           Always add in the full-text search index
**   --noverify        Skip the verification of changes to the BLOB table
**   --noindex         Always omit the full-text search index
**   --pagesize N      Set the database pagesize to N. (512..65536 and power of 2)

**   --randomize       Scan artifacts in a random order
**   --stats           Show artifact statistics after rebuilding
**   --vacuum          Run VACUUM on the database after rebuilding
**   --wal             Set Write-Ahead-Log journalling mode on the database
**
** See also: deconstruct, reconstruct
*/
After (lines 531-545):
**   --deanalyze       Remove ANALYZE tables from the database
**   --force           Force the rebuild to complete even if errors are seen
**   --ifneeded        Only do the rebuild if it would change the schema version
**   --index           Always add in the full-text search index
**   --noverify        Skip the verification of changes to the BLOB table
**   --noindex         Always omit the full-text search index
**   --pagesize N      Set the database pagesize to N. (512..65536 and power of 2)
**   --quiet           Only show output if there are errors
**   --randomize       Scan artifacts in a random order
**   --stats           Show artifact statistics after rebuilding
**   --vacuum          Run VACUUM on the database after rebuilding
**   --wal             Set Write-Ahead-Log journalling mode on the database
**
** See also: deconstruct, reconstruct
*/
Changes to src/setup.c.
Before (lines 326-340):
  const char *zId, *zLogin, *zInfo, *zCap, *zPw;
  const char *zGroup;
  const char *zOldLogin;
  int doWrite;
  int uid, i;
  int higherUser = 0;  /* True if user being edited is SETUP and the */
                       /* user doing the editing is ADMIN.  Disallow editing */
  char *inherit[128];
  int a[128];
  const char *oa[128];

  /* Must have ADMIN privileges to access this page
  */
  login_check_credentials();
  if( !g.perm.Admin ){ login_needed(0); return; }
After (lines 326-340):
  const char *zId, *zLogin, *zInfo, *zCap, *zPw;
  const char *zGroup;
  const char *zOldLogin;
  int doWrite;
  int uid, i;
  int higherUser = 0;  /* True if user being edited is SETUP and the */
                       /* user doing the editing is ADMIN.  Disallow editing */
  const char *inherit[128];
  int a[128];
  const char *oa[128];

  /* Must have ADMIN privileges to access this page
  */
  login_check_credentials();
  if( !g.perm.Admin ){ login_needed(0); return; }
Before (lines 904-918):
** Generate an entry box for an attribute.
*/
void entry_attribute(
  const char *zLabel,   /* The text label on the entry box */
  int width,            /* Width of the entry box */
  const char *zVar,     /* The corresponding row in the VAR table */
  const char *zQParm,   /* The query parameter */
  char *zDflt,          /* Default value if VAR table entry does not exist */
  int disabled          /* 1 if disabled */
){
  const char *zVal = db_get(zVar, zDflt);
  const char *zQ = P(zQParm);
  if( zQ && fossil_strcmp(zQ,zVal)!=0 ){
    const int nZQ = (int)strlen(zQ);
    login_verify_csrf_secret();
After (lines 904-918):
** Generate an entry box for an attribute.
*/
void entry_attribute(
  const char *zLabel,   /* The text label on the entry box */
  int width,            /* Width of the entry box */
  const char *zVar,     /* The corresponding row in the VAR table */
  const char *zQParm,   /* The query parameter */
  const char *zDflt,    /* Default value if VAR table entry does not exist */
  int disabled          /* 1 if disabled */
){
  const char *zVal = db_get(zVar, zDflt);
  const char *zQ = P(zQParm);
  if( zQ && fossil_strcmp(zQ,zVal)!=0 ){
    const int nZQ = (int)strlen(zQ);
    login_verify_csrf_secret();
Before (lines 936-950):
  int rows,             /* Rows in the textarea */
  int cols,             /* Columns in the textarea */
  const char *zVar,     /* The corresponding row in the VAR table */
  const char *zQP,      /* The query parameter */
  const char *zDflt,    /* Default value if VAR table entry does not exist */
  int disabled          /* 1 if the textarea should  not be editable */
){
  const char *z = db_get(zVar, (char*)zDflt);
  const char *zQ = P(zQP);
  if( zQ && !disabled && fossil_strcmp(zQ,z)!=0){
    const int nZQ = (int)strlen(zQ);
    login_verify_csrf_secret();
    db_set(zVar, zQ, 0);
    admin_log("Set textarea_attribute %Q to: %.*s%s",
              zVar, 20, zQ, (nZQ>20 ? "..." : ""));
After (lines 936-950):
  int rows,             /* Rows in the textarea */
  int cols,             /* Columns in the textarea */
  const char *zVar,     /* The corresponding row in the VAR table */
  const char *zQP,      /* The query parameter */
  const char *zDflt,    /* Default value if VAR table entry does not exist */
  int disabled          /* 1 if the textarea should  not be editable */
){
  const char *z = db_get(zVar, zDflt);
  const char *zQ = P(zQP);
  if( zQ && !disabled && fossil_strcmp(zQ,z)!=0){
    const int nZQ = (int)strlen(zQ);
    login_verify_csrf_secret();
    db_set(zVar, zQ, 0);
    admin_log("Set textarea_attribute %Q to: %.*s%s",
              zVar, 20, zQ, (nZQ>20 ? "..." : ""));
Before (lines 970-984):
  const char *zLabel,   /* The text label on the menu */
  const char *zVar,     /* The corresponding row in the VAR table */
  const char *zQP,      /* The query parameter */
  const char *zDflt,    /* Default value if VAR table entry does not exist */
  int nChoice,          /* Number of choices */
  const char *const *azChoice /* Choices. 2 per choice: (VAR value, Display) */
){
  const char *z = db_get(zVar, (char*)zDflt);
  const char *zQ = P(zQP);
  int i;
  if( zQ && fossil_strcmp(zQ,z)!=0){
    const int nZQ = (int)strlen(zQ);
    login_verify_csrf_secret();
    db_set(zVar, zQ, 0);
    admin_log("Set multiple_choice_attribute %Q to: %.*s%s",
After (lines 970-984):
  const char *zLabel,   /* The text label on the menu */
  const char *zVar,     /* The corresponding row in the VAR table */
  const char *zQP,      /* The query parameter */
  const char *zDflt,    /* Default value if VAR table entry does not exist */
  int nChoice,          /* Number of choices */
  const char *const *azChoice /* Choices. 2 per choice: (VAR value, Display) */
){
  const char *z = db_get(zVar, zDflt);
  const char *zQ = P(zQP);
  int i;
  if( zQ && fossil_strcmp(zQ,z)!=0){
    const int nZQ = (int)strlen(zQ);
    login_verify_csrf_secret();
    db_set(zVar, zQ, 0);
    admin_log("Set multiple_choice_attribute %Q to: %.*s%s",
Before (lines 1470-1484):
                      (char*)pSet->def, hasVersionableValue);
      @<br />
    }
  }
  @ </td></tr></table>
  @ </div></form>
  @ <p>Settings marked with (v) are 'versionable' and will be overridden
  @ by the contents of files named <tt>.fossil-settings/PROPERTY</tt>.

  @ If such a file is present, the corresponding field above is not
  @ editable.</p><hr /><p>
  @ These settings work in the same way, as the <kbd>set</kbd>
  @ commandline:<br />
  @ </p><pre>%s(zHelp_setting_cmd)</pre>
  db_end_transaction(0);
  style_footer();
After (lines 1470-1485):
                      (char*)pSet->def, hasVersionableValue);
      @<br />
    }
  }
  @ </td></tr></table>
  @ </div></form>
  @ <p>Settings marked with (v) are 'versionable' and will be overridden
  @ by the contents of files named <tt>.fossil-settings/PROPERTY</tt>
  @ in the check-out root.
  @ If such a file is present, the corresponding field above is not
  @ editable.</p><hr /><p>
  @ These settings work in the same way, as the <kbd>set</kbd>
  @ commandline:<br />
  @ </p><pre>%s(zHelp_setting_cmd)</pre>
  db_end_transaction(0);
  style_footer();
Changes to src/shell.c.
Before (lines 1315-1328):
    fprintf(pArg->out, "Sort Operations:                     %d\n", iCur);
    iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_AUTOINDEX,bReset);
    fprintf(pArg->out, "Autoindex Inserts:                   %d\n", iCur);
    iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_VM_STEP, bReset);
    fprintf(pArg->out, "Virtual Machine Steps:               %d\n", iCur);
  }



  return 0;
}

/*
** Display scan stats.
*/
static void display_scanstats(
After (lines 1315-1330):
    fprintf(pArg->out, "Sort Operations:                     %d\n", iCur);
    iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_AUTOINDEX,bReset);
    fprintf(pArg->out, "Autoindex Inserts:                   %d\n", iCur);
    iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_VM_STEP, bReset);
    fprintf(pArg->out, "Virtual Machine Steps:               %d\n", iCur);
  }

  /* Do not remove this machine readable comment: extra-stats-output-here */

  return 0;
}

/*
** Display scan stats.
*/
static void display_scanstats(
Before (lines 2606-2619):
    sqlite3_free(zSql);
    fprintf(p->out, "%-20s %d\n", aQuery[i].zName, val);
  }
  sqlite3_free(zSchemaTab);
  return 0;
}


/*
** If an input line begins with "." then invoke this routine to
** process that line.
**
** Return 1 on error, 2 to exit, and 0 otherwise.
*/
After (lines 2608-2637):
    sqlite3_free(zSql);
    fprintf(p->out, "%-20s %d\n", aQuery[i].zName, val);
  }
  sqlite3_free(zSchemaTab);
  return 0;
}

/*
** Print the current sqlite3_errmsg() value to stderr and return 1.
*/
static int shellDatabaseError(sqlite3 *db){
  const char *zErr = sqlite3_errmsg(db);
  fprintf(stderr, "Error: %s\n", zErr);
  return 1;
}

/*
** Print an out-of-memory message to stderr and return 1.
*/
static int shellNomemError(void){
  fprintf(stderr, "Error: out of memory\n");
  return 1;
}

/*
** If an input line begins with "." then invoke this routine to
** process that line.
**
** Return 1 on error, 2 to exit, and 0 otherwise.
*/
Before (lines 3707-3792):
    sqlite3_stmt *pStmt;
    char **azResult;
    int nRow, nAlloc;
    char *zSql = 0;
    int ii;
    open_db(p, 0);
    rc = sqlite3_prepare_v2(p->db, "PRAGMA database_list", -1, &pStmt, 0);
    if( rc ) return rc;




    zSql = sqlite3_mprintf(
        "SELECT name FROM sqlite_master"
        " WHERE type IN ('table','view')"
        "   AND name NOT LIKE 'sqlite_%%'"
        "   AND name LIKE ?1");
    while( sqlite3_step(pStmt)==SQLITE_ROW ){
      const char *zDbName = (const char*)sqlite3_column_text(pStmt, 1);
      if( zDbName==0 || strcmp(zDbName,"main")==0 ) continue;
      if( strcmp(zDbName,"temp")==0 ){
        zSql = sqlite3_mprintf(
                 "%z UNION ALL "
                 "SELECT 'temp.' || name FROM sqlite_temp_master"
                 " WHERE type IN ('table','view')"
                 "   AND name NOT LIKE 'sqlite_%%'"
                 "   AND name LIKE ?1", zSql);
      }else{
        zSql = sqlite3_mprintf(
                 "%z UNION ALL "
                 "SELECT '%q.' || name FROM \"%w\".sqlite_master"
                 " WHERE type IN ('table','view')"
                 "   AND name NOT LIKE 'sqlite_%%'"
                 "   AND name LIKE ?1", zSql, zDbName, zDbName);
      }
    }
    sqlite3_finalize(pStmt);

    zSql = sqlite3_mprintf("%z ORDER BY 1", zSql);
    rc = sqlite3_prepare_v2(p->db, zSql, -1, &pStmt, 0);

    sqlite3_free(zSql);

    if( rc ) return rc;



    nRow = nAlloc = 0;
    azResult = 0;
    if( nArg>1 ){
      sqlite3_bind_text(pStmt, 1, azArg[1], -1, SQLITE_TRANSIENT);
    }else{
      sqlite3_bind_text(pStmt, 1, "%", -1, SQLITE_STATIC);
    }
    while( sqlite3_step(pStmt)==SQLITE_ROW ){
      if( nRow>=nAlloc ){
        char **azNew;
        int n2 = nAlloc*2 + 10;
        azNew = sqlite3_realloc64(azResult, sizeof(azResult[0])*n2);
        if( azNew==0 ){
          fprintf(stderr, "Error: out of memory\n");
          break;
        }
        nAlloc = n2;
        azResult = azNew;
      }
      azResult[nRow] = sqlite3_mprintf("%s", sqlite3_column_text(pStmt, 0));
      if( azResult[nRow] ) nRow++;


    }


    sqlite3_finalize(pStmt);        




    if( nRow>0 ){
      int len, maxlen = 0;
      int i, j;
      int nPrintCol, nPrintRow;
      for(i=0; i<nRow; i++){
        len = strlen30(azResult[i]);
        if( len>maxlen ) maxlen = len;
      }
      nPrintCol = 80/(maxlen+2);
      if( nPrintCol<1 ) nPrintCol = 1;
      nPrintRow = (nRow + nPrintCol - 1)/nPrintCol;
      for(i=0; i<nPrintRow; i++){
        for(j=i; j<nRow; j+=nPrintRow){
          char *zSp = j<nPrintRow ? "" : "  ";
          fprintf(p->out, "%s%-*s", zSp, maxlen, azResult[j] ? azResult[j]:"");
        }
        fprintf(p->out, "\n");
      }
    }

    for(ii=0; ii<nRow; ii++) sqlite3_free(azResult[ii]);
    sqlite3_free(azResult);
  }else

  if( c=='t' && n>=8 && strncmp(azArg[0], "testctrl", n)==0 && nArg>=2 ){
    static const struct {
       const char *zCtrlName;   /* Name of a test-control option */
After (lines 3725-3829):
    sqlite3_stmt *pStmt;
    char **azResult;
    int nRow, nAlloc;
    char *zSql = 0;
    int ii;
    open_db(p, 0);
    rc = sqlite3_prepare_v2(p->db, "PRAGMA database_list", -1, &pStmt, 0);
    if( rc ) return shellDatabaseError(p->db);

    /* Create an SQL statement to query for the list of tables in the
    ** main and all attached databases where the table name matches the
    ** LIKE pattern bound to variable "?1". */
    zSql = sqlite3_mprintf(
        "SELECT name FROM sqlite_master"
        " WHERE type IN ('table','view')"
        "   AND name NOT LIKE 'sqlite_%%'"
        "   AND name LIKE ?1");
    while( zSql && sqlite3_step(pStmt)==SQLITE_ROW ){
      const char *zDbName = (const char*)sqlite3_column_text(pStmt, 1);
      if( zDbName==0 || strcmp(zDbName,"main")==0 ) continue;
      if( strcmp(zDbName,"temp")==0 ){
        zSql = sqlite3_mprintf(
                 "%z UNION ALL "
                 "SELECT 'temp.' || name FROM sqlite_temp_master"
                 " WHERE type IN ('table','view')"
                 "   AND name NOT LIKE 'sqlite_%%'"
                 "   AND name LIKE ?1", zSql);
      }else{
        zSql = sqlite3_mprintf(
                 "%z UNION ALL "
                 "SELECT '%q.' || name FROM \"%w\".sqlite_master"
                 " WHERE type IN ('table','view')"
                 "   AND name NOT LIKE 'sqlite_%%'"
                 "   AND name LIKE ?1", zSql, zDbName, zDbName);
      }
    }
    rc = sqlite3_finalize(pStmt);
    if( zSql && rc==SQLITE_OK ){
      zSql = sqlite3_mprintf("%z ORDER BY 1", zSql);
      if( zSql ) rc = sqlite3_prepare_v2(p->db, zSql, -1, &pStmt, 0);
    }
    sqlite3_free(zSql);
    if( !zSql ) return shellNomemError();
    if( rc ) return shellDatabaseError(p->db);

    /* Run the SQL statement prepared by the above block. Store the results
    ** as an array of nul-terminated strings in azResult[].  */
    nRow = nAlloc = 0;
    azResult = 0;
    if( nArg>1 ){
      sqlite3_bind_text(pStmt, 1, azArg[1], -1, SQLITE_TRANSIENT);
    }else{
      sqlite3_bind_text(pStmt, 1, "%", -1, SQLITE_STATIC);
    }
    while( sqlite3_step(pStmt)==SQLITE_ROW ){
      if( nRow>=nAlloc ){
        char **azNew;
        int n2 = nAlloc*2 + 10;
        azNew = sqlite3_realloc64(azResult, sizeof(azResult[0])*n2);
        if( azNew==0 ){
          rc = shellNomemError();
          break;
        }
        nAlloc = n2;
        azResult = azNew;
      }
      azResult[nRow] = sqlite3_mprintf("%s", sqlite3_column_text(pStmt, 0));
      if( 0==azResult[nRow] ){
        rc = shellNomemError();
        break;
      }
      nRow++;
    }
    if( sqlite3_finalize(pStmt)!=SQLITE_OK ){
      rc = shellDatabaseError(p->db);
    }

    /* Pretty-print the contents of array azResult[] to the output */
    if( rc==0 && nRow>0 ){
      int len, maxlen = 0;
      int i, j;
      int nPrintCol, nPrintRow;
      for(i=0; i<nRow; i++){
        len = strlen30(azResult[i]);
        if( len>maxlen ) maxlen = len;
      }
      nPrintCol = 80/(maxlen+2);
      if( nPrintCol<1 ) nPrintCol = 1;
      nPrintRow = (nRow + nPrintCol - 1)/nPrintCol;
      for(i=0; i<nPrintRow; i++){
        for(j=i; j<nRow; j+=nPrintRow){
          char *zSp = j<nPrintRow ? "" : "  ";
          fprintf(p->out, "%s%-*s", zSp, maxlen, azResult[j] ? azResult[j]:"");
        }
        fprintf(p->out, "\n");
      }
    }

    for(ii=0; ii<nRow; ii++) sqlite3_free(azResult[ii]);
    sqlite3_free(azResult);
  }else

  if( c=='t' && n>=8 && strncmp(azArg[0], "testctrl", n)==0 && nArg>=2 ){
    static const struct {
       const char *zCtrlName;   /* Name of a test-control option */
Before (lines 4246-4261):
    }
  }
  if( nSql ){
    if( !_all_whitespace(zSql) ){
      fprintf(stderr, "Error: incomplete SQL: %s\n", zSql);
      errCnt++;
    }
    free(zSql);
  }

  free(zLine);
  return errCnt>0;
}

/*
** Return a pathname which is the user's home directory.  A
** 0 return indicates an error of some kind.
After (lines 4283-4298):
    }
  }
  if( nSql ){
    if( !_all_whitespace(zSql) ){
      fprintf(stderr, "Error: incomplete SQL: %s\n", zSql);
      errCnt++;
    }

  }
  free(zSql);
  free(zLine);
  return errCnt>0;
}

/*
** Return a pathname which is the user's home directory.  A
** 0 return indicates an error of some kind.
Changes to src/sitemap.c.
Before (lines 113-126):
    @   <li>%z(href("%R/hash-collisions"))Collisions on SHA1 hash
    @       prefixes</a></li>
    if( g.perm.Admin ){
      @   <li>%z(href("%R/urllist"))List of URLs used to access
      @       this repository</a></li>
    }
    @   <li>%z(href("%R/bloblist"))List of Artifacts</a></li>

    @   </ul>
    @ </li>
  }
  @ <li>On-line Documentation
  @   <ul>
  @   <li>%z(href("%R/help"))List of All Commands and Web Pages</a></li>
  @   <li>%z(href("%R/test-all-help"))All "help" text on a single page</a></li>
After (lines 113-127):
    @   <li>%z(href("%R/hash-collisions"))Collisions on SHA1 hash
    @       prefixes</a></li>
    if( g.perm.Admin ){
      @   <li>%z(href("%R/urllist"))List of URLs used to access
      @       this repository</a></li>
    }
    @   <li>%z(href("%R/bloblist"))List of Artifacts</a></li>
    @   <li>%z(href("%R/timewarps"))List of "Timewarp" Check-ins</a></li>
    @   </ul>
    @ </li>
  }
  @ <li>On-line Documentation
  @   <ul>
  @   <li>%z(href("%R/help"))List of All Commands and Web Pages</a></li>
  @   <li>%z(href("%R/test-all-help"))All "help" text on a single page</a></li>
Before (lines 135-154):
    @   </ul></li>
  }
  @ <li>Test Pages
  @   <ul>
  if( g.perm.Admin || db_get_boolean("test_env_enable",0) ){
    @   <li>%z(href("%R/test_env"))CGI Environment Test</a></li>
  }
  if( g.perm.Read && g.perm.Hyperlink ){
    @   <li>%z(href("%R/test_timewarps"))List of "Timewarp" Check-ins</a></li>
  }
  if( g.perm.Read ){
    @   <li>%z(href("%R/test-rename-list"))List of file renames</a></li>
  }
  @   <li>%z(href("%R/hash-color-test"))Page to experiment with the automatic
  @       colors assigned to branch names</a>
  @   <li>%z(href("%R/test-captcha"))Random ASCII-art Captcha image</a></li>
  @   </ul></li>
  @ </ul></li>
  style_footer();
}
After (lines 136-152):
    @   </ul></li>
  }
  @ <li>Test Pages
  @   <ul>
  if( g.perm.Admin || db_get_boolean("test_env_enable",0) ){
    @   <li>%z(href("%R/test_env"))CGI Environment Test</a></li>
  }



  if( g.perm.Read ){
    @   <li>%z(href("%R/test-rename-list"))List of file renames</a></li>
  }
  @   <li>%z(href("%R/hash-color-test"))Page to experiment with the automatic
  @       colors assigned to branch names</a>
  @   <li>%z(href("%R/test-captcha"))Random ASCII-art Captcha image</a></li>
  @   </ul></li>
  @ </ul></li>
  style_footer();
}
Changes to src/skins.c.
Before (lines 71-85):
** attributes of the skin that cannot be easily specified using CSS
** or that need to be known on the server-side.
**
** The following array holds the value for all known skin details.
*/
static struct SkinDetail {
  const char *zName;      /* Name of the detail */
  char *zValue;           /* Value of the detail */
} aSkinDetail[] = {
  { "timeline-arrowheads",        "1"  },
  { "timeline-circle-nodes",      "0"  },
  { "timeline-color-graph-lines", "0"  },
  { "white-foreground",           "0"  },
};

After (lines 71-85):
** attributes of the skin that cannot be easily specified using CSS
** or that need to be known on the server-side.
**
** The following array holds the value for all known skin details.
*/
static struct SkinDetail {
  const char *zName;      /* Name of the detail */
  const char *zValue;     /* Value of the detail */
} aSkinDetail[] = {
  { "timeline-arrowheads",        "1"  },
  { "timeline-circle-nodes",      "0"  },
  { "timeline-color-graph-lines", "0"  },
  { "white-foreground",           "0"  },
};

Changes to src/sqlite3.c.

more than 10,000 changes

Changes to src/sqlite3.h.
Before (lines 107-134):
** string contains the date and time of the check-in (UTC) and an SHA1
** hash of the entire source tree.
**
** See also: [sqlite3_libversion()],
** [sqlite3_libversion_number()], [sqlite3_sourceid()],
** [sqlite_version()] and [sqlite_source_id()].
*/
#define SQLITE_VERSION        "3.8.11"
#define SQLITE_VERSION_NUMBER 3008011
#define SQLITE_SOURCE_ID      "2015-07-08 16:22:42 5348ffc3fda5168c1e9e14aa88b0c6aedbda7c94"

/*
** CAPI3REF: Run-Time Library Version Numbers
** KEYWORDS: sqlite3_version, sqlite3_sourceid
**
** These interfaces provide the same information as the [SQLITE_VERSION],
** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros
** but are associated with the library instead of the header file.  ^(Cautious
** programmers might include assert() statements in their application to
** verify that values returned by these interfaces match the macros in
** the header, and thus insure that the application is
** compiled with matching library and header files.
**
** <blockquote><pre>
** assert( sqlite3_libversion_number()==SQLITE_VERSION_NUMBER );
** assert( strcmp(sqlite3_sourceid(),SQLITE_SOURCE_ID)==0 );
** assert( strcmp(sqlite3_libversion(),SQLITE_VERSION)==0 );
** </pre></blockquote>)^
After (lines 107-134):
** string contains the date and time of the check-in (UTC) and an SHA1
** hash of the entire source tree.
**
** See also: [sqlite3_libversion()],
** [sqlite3_libversion_number()], [sqlite3_sourceid()],
** [sqlite_version()] and [sqlite_source_id()].
*/
#define SQLITE_VERSION        "3.9.1"
#define SQLITE_VERSION_NUMBER 3009001
#define SQLITE_SOURCE_ID      "2015-10-16 17:31:12 767c1727fec4ce11b83f25b3f1bfcfe68a2c8b02"

/*
** CAPI3REF: Run-Time Library Version Numbers
** KEYWORDS: sqlite3_version, sqlite3_sourceid
**
** These interfaces provide the same information as the [SQLITE_VERSION],
** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros
** but are associated with the library instead of the header file.  ^(Cautious
** programmers might include assert() statements in their application to
** verify that values returned by these interfaces match the macros in
** the header, and thus ensure that the application is
** compiled with matching library and header files.
**
** <blockquote><pre>
** assert( sqlite3_libversion_number()==SQLITE_VERSION_NUMBER );
** assert( strcmp(sqlite3_sourceid(),SQLITE_SOURCE_ID)==0 );
** assert( strcmp(sqlite3_libversion(),SQLITE_VERSION)==0 );
** </pre></blockquote>)^
Before (lines 370-384):
** to an empty string, or a pointer that contains only whitespace and/or 
** SQL comments, then no SQL statements are evaluated and the database
** is not changed.
**
** Restrictions:
**
** <ul>
** <li> The application must insure that the 1st parameter to sqlite3_exec()
**      is a valid and open [database connection].
** <li> The application must not close the [database connection] specified by
**      the 1st parameter to sqlite3_exec() while sqlite3_exec() is running.
** <li> The application must not modify the SQL statement text passed into
**      the 2nd parameter of sqlite3_exec() while sqlite3_exec() is running.
** </ul>
*/
After (lines 370-384):
** to an empty string, or a pointer that contains only whitespace and/or 
** SQL comments, then no SQL statements are evaluated and the database
** is not changed.
**
** Restrictions:
**
** <ul>
** <li> The application must ensure that the 1st parameter to sqlite3_exec()
**      is a valid and open [database connection].
** <li> The application must not close the [database connection] specified by
**      the 1st parameter to sqlite3_exec() while sqlite3_exec() is running.
** <li> The application must not modify the SQL statement text passed into
**      the 2nd parameter of sqlite3_exec() while sqlite3_exec() is running.
** </ul>
*/
Before (lines 473-486):
#define SQLITE_IOERR_SHMLOCK           (SQLITE_IOERR | (20<<8))
#define SQLITE_IOERR_SHMMAP            (SQLITE_IOERR | (21<<8))
#define SQLITE_IOERR_SEEK              (SQLITE_IOERR | (22<<8))
#define SQLITE_IOERR_DELETE_NOENT      (SQLITE_IOERR | (23<<8))
#define SQLITE_IOERR_MMAP              (SQLITE_IOERR | (24<<8))
#define SQLITE_IOERR_GETTEMPPATH       (SQLITE_IOERR | (25<<8))
#define SQLITE_IOERR_CONVPATH          (SQLITE_IOERR | (26<<8))

#define SQLITE_LOCKED_SHAREDCACHE      (SQLITE_LOCKED |  (1<<8))
#define SQLITE_BUSY_RECOVERY           (SQLITE_BUSY   |  (1<<8))
#define SQLITE_BUSY_SNAPSHOT           (SQLITE_BUSY   |  (2<<8))
#define SQLITE_CANTOPEN_NOTEMPDIR      (SQLITE_CANTOPEN | (1<<8))
#define SQLITE_CANTOPEN_ISDIR          (SQLITE_CANTOPEN | (2<<8))
#define SQLITE_CANTOPEN_FULLPATH       (SQLITE_CANTOPEN | (3<<8))
#define SQLITE_CANTOPEN_CONVPATH       (SQLITE_CANTOPEN | (4<<8))
After (lines 473-487):
#define SQLITE_IOERR_SHMLOCK           (SQLITE_IOERR | (20<<8))
#define SQLITE_IOERR_SHMMAP            (SQLITE_IOERR | (21<<8))
#define SQLITE_IOERR_SEEK              (SQLITE_IOERR | (22<<8))
#define SQLITE_IOERR_DELETE_NOENT      (SQLITE_IOERR | (23<<8))
#define SQLITE_IOERR_MMAP              (SQLITE_IOERR | (24<<8))
#define SQLITE_IOERR_GETTEMPPATH       (SQLITE_IOERR | (25<<8))
#define SQLITE_IOERR_CONVPATH          (SQLITE_IOERR | (26<<8))
#define SQLITE_IOERR_VNODE             (SQLITE_IOERR | (27<<8))
#define SQLITE_LOCKED_SHAREDCACHE      (SQLITE_LOCKED |  (1<<8))
#define SQLITE_BUSY_RECOVERY           (SQLITE_BUSY   |  (1<<8))
#define SQLITE_BUSY_SNAPSHOT           (SQLITE_BUSY   |  (2<<8))
#define SQLITE_CANTOPEN_NOTEMPDIR      (SQLITE_CANTOPEN | (1<<8))
#define SQLITE_CANTOPEN_ISDIR          (SQLITE_CANTOPEN | (2<<8))
#define SQLITE_CANTOPEN_FULLPATH       (SQLITE_CANTOPEN | (3<<8))
#define SQLITE_CANTOPEN_CONVPATH       (SQLITE_CANTOPEN | (4<<8))
Before (lines 963-979):
** circumstances in order to fix a problem with priority inversion.
** Applications should <em>not</em> use this file-control.
**
** <li>[[SQLITE_FCNTL_ZIPVFS]]
** The [SQLITE_FCNTL_ZIPVFS] opcode is implemented by zipvfs only. All other
** VFS should return SQLITE_NOTFOUND for this opcode.
**
** <li>[[SQLITE_FCNTL_OTA]]
** The [SQLITE_FCNTL_OTA] opcode is implemented by the special VFS used by
** the OTA extension only.  All other VFS should return SQLITE_NOTFOUND for
** this opcode.  
** </ul>
*/
#define SQLITE_FCNTL_LOCKSTATE               1
#define SQLITE_FCNTL_GET_LOCKPROXYFILE       2
#define SQLITE_FCNTL_SET_LOCKPROXYFILE       3
#define SQLITE_FCNTL_LAST_ERRNO              4
After (lines 964-980):
** circumstances in order to fix a problem with priority inversion.
** Applications should <em>not</em> use this file-control.
**
** <li>[[SQLITE_FCNTL_ZIPVFS]]
** The [SQLITE_FCNTL_ZIPVFS] opcode is implemented by zipvfs only. All other
** VFS should return SQLITE_NOTFOUND for this opcode.
**
** <li>[[SQLITE_FCNTL_RBU]]
** The [SQLITE_FCNTL_RBU] opcode is implemented by the special VFS used by
** the RBU extension only.  All other VFS should return SQLITE_NOTFOUND for
** this opcode.  
** </ul>
*/
#define SQLITE_FCNTL_LOCKSTATE               1
#define SQLITE_FCNTL_GET_LOCKPROXYFILE       2
#define SQLITE_FCNTL_SET_LOCKPROXYFILE       3
#define SQLITE_FCNTL_LAST_ERRNO              4
Before (lines 993-1007):
#define SQLITE_FCNTL_TRACE                  19
#define SQLITE_FCNTL_HAS_MOVED              20
#define SQLITE_FCNTL_SYNC                   21
#define SQLITE_FCNTL_COMMIT_PHASETWO        22
#define SQLITE_FCNTL_WIN32_SET_HANDLE       23
#define SQLITE_FCNTL_WAL_BLOCK              24
#define SQLITE_FCNTL_ZIPVFS                 25
#define SQLITE_FCNTL_OTA                    26

/* deprecated names */
#define SQLITE_GET_LOCKPROXYFILE      SQLITE_FCNTL_GET_LOCKPROXYFILE
#define SQLITE_SET_LOCKPROXYFILE      SQLITE_FCNTL_SET_LOCKPROXYFILE
#define SQLITE_LAST_ERRNO             SQLITE_FCNTL_LAST_ERRNO


After (lines 994-1008):
#define SQLITE_FCNTL_TRACE                  19
#define SQLITE_FCNTL_HAS_MOVED              20
#define SQLITE_FCNTL_SYNC                   21
#define SQLITE_FCNTL_COMMIT_PHASETWO        22
#define SQLITE_FCNTL_WIN32_SET_HANDLE       23
#define SQLITE_FCNTL_WAL_BLOCK              24
#define SQLITE_FCNTL_ZIPVFS                 25
#define SQLITE_FCNTL_RBU                    26

/* deprecated names */
#define SQLITE_GET_LOCKPROXYFILE      SQLITE_FCNTL_GET_LOCKPROXYFILE
#define SQLITE_SET_LOCKPROXYFILE      SQLITE_FCNTL_SET_LOCKPROXYFILE
#define SQLITE_LAST_ERRNO             SQLITE_FCNTL_LAST_ERRNO


Before (lines 1362-1378):
**
** The sqlite3_config() interface is used to make global configuration
** changes to SQLite in order to tune SQLite to the specific needs of
** the application.  The default configuration is recommended for most
** applications and so this routine is usually not necessary.  It is
** provided to support rare applications with unusual needs.
**
** The sqlite3_config() interface is not threadsafe.  The application
** must insure that no other SQLite interfaces are invoked by other
** threads while sqlite3_config() is running.  Furthermore, sqlite3_config()


** may only be invoked prior to library initialization using
** [sqlite3_initialize()] or after shutdown by [sqlite3_shutdown()].
** ^If sqlite3_config() is called after [sqlite3_initialize()] and before
** [sqlite3_shutdown()] then it will return SQLITE_MISUSE.
** Note, however, that ^sqlite3_config() can be called as part of the
** implementation of an application-defined [sqlite3_os_init()].
**
After (lines 1363-1381):
**
** The sqlite3_config() interface is used to make global configuration
** changes to SQLite in order to tune SQLite to the specific needs of
** the application.  The default configuration is recommended for most
** applications and so this routine is usually not necessary.  It is
** provided to support rare applications with unusual needs.
**
** <b>The sqlite3_config() interface is not threadsafe. The application
** must ensure that no other SQLite interfaces are invoked by other
** threads while sqlite3_config() is running.</b>
**
** The sqlite3_config() interface
** may only be invoked prior to library initialization using
** [sqlite3_initialize()] or after shutdown by [sqlite3_shutdown()].
** ^If sqlite3_config() is called after [sqlite3_initialize()] and before
** [sqlite3_shutdown()] then it will return SQLITE_MISUSE.
** Note, however, that ^sqlite3_config() can be called as part of the
** implementation of an application-defined [sqlite3_os_init()].
**
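
A minimal sketch of the calling pattern these constraints imply; the helper name is invented, and
SQLITE_CONFIG_MULTITHREAD merely stands in for whatever configuration verb an application actually
needs:

#include <sqlite3.h>

/* Hypothetical single-threaded startup helper: all configuration is
** finished before sqlite3_initialize(), as required above. */
static int app_sqlite_startup(void){
  int rc = sqlite3_config(SQLITE_CONFIG_MULTITHREAD);
  if( rc!=SQLITE_OK ) return rc;   /* SQLITE_MISUSE if called too late */
  return sqlite3_initialize();
}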
Before (lines 3369-3383):

/*
** CAPI3REF: Determine If A Prepared Statement Has Been Reset
** METHOD: sqlite3_stmt
**
** ^The sqlite3_stmt_busy(S) interface returns true (non-zero) if the
** [prepared statement] S has been stepped at least once using 
** [sqlite3_step(S)] but has not run to completion and/or has not 

** been reset using [sqlite3_reset(S)].  ^The sqlite3_stmt_busy(S)
** interface returns false if S is a NULL pointer.  If S is not a 
** NULL pointer and is not a pointer to a valid [prepared statement]
** object, then the behavior is undefined and probably undesirable.
**
** This interface can be used in combination [sqlite3_next_stmt()]
** to locate all prepared statements associated with a database 
After (lines 3372-3387):

/*
** CAPI3REF: Determine If A Prepared Statement Has Been Reset
** METHOD: sqlite3_stmt
**
** ^The sqlite3_stmt_busy(S) interface returns true (non-zero) if the
** [prepared statement] S has been stepped at least once using 
** [sqlite3_step(S)] but has neither run to completion (returned
** [SQLITE_DONE] from [sqlite3_step(S)]) nor
** been reset using [sqlite3_reset(S)].  ^The sqlite3_stmt_busy(S)
** interface returns false if S is a NULL pointer.  If S is not a 
** NULL pointer and is not a pointer to a valid [prepared statement]
** object, then the behavior is undefined and probably undesirable.
**
** This interface can be used in combination [sqlite3_next_stmt()]
** to locate all prepared statements associated with a database 
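
A sketch of that combination, walking a connection's prepared statements with sqlite3_next_stmt()
and counting the busy ones (the helper name is invented):

#include <sqlite3.h>

/* Return the number of prepared statements on db that have been stepped
** but have neither finished nor been reset. */
static int count_busy_statements(sqlite3 *db){
  sqlite3_stmt *pStmt;
  int nBusy = 0;
  for(pStmt=sqlite3_next_stmt(db,0); pStmt; pStmt=sqlite3_next_stmt(db,pStmt)){
    if( sqlite3_stmt_busy(pStmt) ) nBusy++;
  }
  return nBusy;
}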
Before (lines 3558-3571):
SQLITE_API int SQLITE_STDCALL sqlite3_bind_null(sqlite3_stmt*, int);
SQLITE_API int SQLITE_STDCALL sqlite3_bind_text(sqlite3_stmt*,int,const char*,int,void(*)(void*));
SQLITE_API int SQLITE_STDCALL sqlite3_bind_text16(sqlite3_stmt*, int, const void*, int, void(*)(void*));
SQLITE_API int SQLITE_STDCALL sqlite3_bind_text64(sqlite3_stmt*, int, const char*, sqlite3_uint64,
                         void(*)(void*), unsigned char encoding);
SQLITE_API int SQLITE_STDCALL sqlite3_bind_value(sqlite3_stmt*, int, const sqlite3_value*);
SQLITE_API int SQLITE_STDCALL sqlite3_bind_zeroblob(sqlite3_stmt*, int, int n);


/*
** CAPI3REF: Number Of SQL Parameters
** METHOD: sqlite3_stmt
**
** ^This routine can be used to find the number of [SQL parameters]
** in a [prepared statement].  SQL parameters are tokens of the
After (lines 3562-3576):
SQLITE_API int SQLITE_STDCALL sqlite3_bind_null(sqlite3_stmt*, int);
SQLITE_API int SQLITE_STDCALL sqlite3_bind_text(sqlite3_stmt*,int,const char*,int,void(*)(void*));
SQLITE_API int SQLITE_STDCALL sqlite3_bind_text16(sqlite3_stmt*, int, const void*, int, void(*)(void*));
SQLITE_API int SQLITE_STDCALL sqlite3_bind_text64(sqlite3_stmt*, int, const char*, sqlite3_uint64,
                         void(*)(void*), unsigned char encoding);
SQLITE_API int SQLITE_STDCALL sqlite3_bind_value(sqlite3_stmt*, int, const sqlite3_value*);
SQLITE_API int SQLITE_STDCALL sqlite3_bind_zeroblob(sqlite3_stmt*, int, int n);
SQLITE_API int SQLITE_STDCALL sqlite3_bind_zeroblob64(sqlite3_stmt*, int, sqlite3_uint64);

/*
** CAPI3REF: Number Of SQL Parameters
** METHOD: sqlite3_stmt
**
** ^This routine can be used to find the number of [SQL parameters]
** in a [prepared statement].  SQL parameters are tokens of the
Before (lines 3621-3635):
** parameter to [sqlite3_bind_blob|sqlite3_bind()].  ^A zero
** is returned if no matching parameter is found.  ^The parameter
** name must be given in UTF-8 even if the original statement
** was prepared from UTF-16 text using [sqlite3_prepare16_v2()].
**
** See also: [sqlite3_bind_blob|sqlite3_bind()],
** [sqlite3_bind_parameter_count()], and
** [sqlite3_bind_parameter_index()].
*/
SQLITE_API int SQLITE_STDCALL sqlite3_bind_parameter_index(sqlite3_stmt*, const char *zName);

/*
** CAPI3REF: Reset All Bindings On A Prepared Statement
** METHOD: sqlite3_stmt
**
After (lines 3626-3640):
** parameter to [sqlite3_bind_blob|sqlite3_bind()].  ^A zero
** is returned if no matching parameter is found.  ^The parameter
** name must be given in UTF-8 even if the original statement
** was prepared from UTF-16 text using [sqlite3_prepare16_v2()].
**
** See also: [sqlite3_bind_blob|sqlite3_bind()],
** [sqlite3_bind_parameter_count()], and
** [sqlite3_bind_parameter_name()].
*/
SQLITE_API int SQLITE_STDCALL sqlite3_bind_parameter_index(sqlite3_stmt*, const char *zName);

/*
** CAPI3REF: Reset All Bindings On A Prepared Statement
** METHOD: sqlite3_stmt
**
Before (lines 4350-4363):
SQLITE_API const unsigned char *SQLITE_STDCALL sqlite3_value_text(sqlite3_value*);
SQLITE_API const void *SQLITE_STDCALL sqlite3_value_text16(sqlite3_value*);
SQLITE_API const void *SQLITE_STDCALL sqlite3_value_text16le(sqlite3_value*);
SQLITE_API const void *SQLITE_STDCALL sqlite3_value_text16be(sqlite3_value*);
SQLITE_API int SQLITE_STDCALL sqlite3_value_type(sqlite3_value*);
SQLITE_API int SQLITE_STDCALL sqlite3_value_numeric_type(sqlite3_value*);

/*
** CAPI3REF: Copy And Free SQL Values
** METHOD: sqlite3_value
**
** ^The sqlite3_value_dup(V) interface makes a copy of the [sqlite3_value]
** object D and returns a pointer to that copy.  ^The [sqlite3_value] returned
** is a [protected sqlite3_value] object even if the input is not.
After (lines 4355-4384):
SQLITE_API const unsigned char *SQLITE_STDCALL sqlite3_value_text(sqlite3_value*);
SQLITE_API const void *SQLITE_STDCALL sqlite3_value_text16(sqlite3_value*);
SQLITE_API const void *SQLITE_STDCALL sqlite3_value_text16le(sqlite3_value*);
SQLITE_API const void *SQLITE_STDCALL sqlite3_value_text16be(sqlite3_value*);
SQLITE_API int SQLITE_STDCALL sqlite3_value_type(sqlite3_value*);
SQLITE_API int SQLITE_STDCALL sqlite3_value_numeric_type(sqlite3_value*);

/*
** CAPI3REF: Finding The Subtype Of SQL Values
** METHOD: sqlite3_value
**
** The sqlite3_value_subtype(V) function returns the subtype for
** an [application-defined SQL function] argument V.  The subtype
** information can be used to pass a limited amount of context from
** one SQL function to another.  Use the [sqlite3_result_subtype()]
** routine to set the subtype for the return value of an SQL function.
**
** SQLite makes no use of subtype itself.  It merely passes the subtype
** from the result of one [application-defined SQL function] into the
** input of another.
*/
SQLITE_API unsigned int SQLITE_STDCALL sqlite3_value_subtype(sqlite3_value*);

/*
** CAPI3REF: Copy And Free SQL Values
** METHOD: sqlite3_value
**
** ^The sqlite3_value_dup(V) interface makes a copy of the [sqlite3_value]
** object D and returns a pointer to that copy.  ^The [sqlite3_value] returned
** is a [protected sqlite3_value] object even if the input is not.
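
Returning to the sqlite3_value_subtype() interface introduced in this hunk, a sketch of the reader
side of the mechanism (the function name has_subtype_42 and the subtype value 42 are invented for
the example):

#include <sqlite3.h>

/* SQL function has_subtype_42(X): returns 1 if the value reaching X
** carries subtype 42, set by another application-defined function in
** the same expression, and 0 otherwise. */
static void hasSubtype42(sqlite3_context *ctx, int argc, sqlite3_value **argv){
  (void)argc;
  sqlite3_result_int(ctx, sqlite3_value_subtype(argv[0])==42);
}

It would be registered in the usual way, e.g. sqlite3_create_function(db, "has_subtype_42", 1,
SQLITE_UTF8, 0, hasSubtype42, 0, 0); a matching setter is sketched after the
sqlite3_result_subtype() declaration further down.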
Before (lines 4530-4546):
** Refer to the [SQL parameter] documentation for additional information.
**
** ^The sqlite3_result_blob() interface sets the result from
** an application-defined function to be the BLOB whose content is pointed
** to by the second parameter and which is N bytes long where N is the
** third parameter.
**
** ^The sqlite3_result_zeroblob() interfaces set the result of
** the application-defined function to be a BLOB containing all zero
** bytes and N bytes in size, where N is the value of the 2nd parameter.
**
** ^The sqlite3_result_double() interface sets the result from
** an application-defined function to be a floating point value specified
** by its 2nd argument.
**
** ^The sqlite3_result_error() and sqlite3_result_error16() functions
** cause the implemented SQL function to throw an exception.
After (lines 4551-4567):
** Refer to the [SQL parameter] documentation for additional information.
**
** ^The sqlite3_result_blob() interface sets the result from
** an application-defined function to be the BLOB whose content is pointed
** to by the second parameter and which is N bytes long where N is the
** third parameter.
**
** ^The sqlite3_result_zeroblob(C,N) and sqlite3_result_zeroblob64(C,N)
** interfaces set the result of the application-defined function to be
** a BLOB containing all zero bytes and N bytes in size.
**
** ^The sqlite3_result_double() interface sets the result from
** an application-defined function to be a floating point value specified
** by its 2nd argument.
**
** ^The sqlite3_result_error() and sqlite3_result_error16() functions
** cause the implemented SQL function to throw an exception.
Before (lines 4647-4660):
SQLITE_API void SQLITE_STDCALL sqlite3_result_text64(sqlite3_context*, const char*,sqlite3_uint64,
                           void(*)(void*), unsigned char encoding);
SQLITE_API void SQLITE_STDCALL sqlite3_result_text16(sqlite3_context*, const void*, int, void(*)(void*));
SQLITE_API void SQLITE_STDCALL sqlite3_result_text16le(sqlite3_context*, const void*, int,void(*)(void*));
SQLITE_API void SQLITE_STDCALL sqlite3_result_text16be(sqlite3_context*, const void*, int,void(*)(void*));
SQLITE_API void SQLITE_STDCALL sqlite3_result_value(sqlite3_context*, sqlite3_value*);
SQLITE_API void SQLITE_STDCALL sqlite3_result_zeroblob(sqlite3_context*, int n);
/*
** CAPI3REF: Define New Collating Sequences
** METHOD: sqlite3
**
** ^These functions add, remove, or modify a [collation] associated
** with the [database connection] specified as the first argument.
SQLITE_API void SQLITE_STDCALL sqlite3_result_text64(sqlite3_context*, const char*,sqlite3_uint64,
                           void(*)(void*), unsigned char encoding);
SQLITE_API void SQLITE_STDCALL sqlite3_result_text16(sqlite3_context*, const void*, int, void(*)(void*));
SQLITE_API void SQLITE_STDCALL sqlite3_result_text16le(sqlite3_context*, const void*, int,void(*)(void*));
SQLITE_API void SQLITE_STDCALL sqlite3_result_text16be(sqlite3_context*, const void*, int,void(*)(void*));
SQLITE_API void SQLITE_STDCALL sqlite3_result_value(sqlite3_context*, sqlite3_value*);
SQLITE_API void SQLITE_STDCALL sqlite3_result_zeroblob(sqlite3_context*, int n);
SQLITE_API int SQLITE_STDCALL sqlite3_result_zeroblob64(sqlite3_context*, sqlite3_uint64 n);


/*
** CAPI3REF: Setting The Subtype Of An SQL Function
** METHOD: sqlite3_context
**
** The sqlite3_result_subtype(C,T) function causes the subtype of
** the result from the [application-defined SQL function] with 
** [sqlite3_context] C to be the value T.  Only the lower 8 bits 
** of the subtype T are preserved in current versions of SQLite;
** higher order bits are discarded.
** The number of subtype bytes preserved by SQLite might increase
** in future releases of SQLite.
*/
SQLITE_API void SQLITE_STDCALL sqlite3_result_subtype(sqlite3_context*,unsigned int);
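A minimal, hedged sketch of how the two subtype interfaces cooperate (the function names tag() and istagged() are hypothetical, not part of SQLite or of this check-in): one function attaches a subtype to its result with sqlite3_result_subtype(), and a second function reads it back with sqlite3_value_subtype() when the first function's result is passed to it directly.

static void tagFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
  sqlite3_result_value(ctx, argv[0]);   /* pass the argument through...   */
  sqlite3_result_subtype(ctx, 0x4a);    /* ...tagged with subtype 0x4a    */
}
static void isTaggedFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
  /* Result is 1 if the argument carries subtype 0x4a, otherwise 0 */
  sqlite3_result_int(ctx, sqlite3_value_subtype(argv[0])==0x4a);
}
/* e.g.  SELECT istagged(tag(x)), istagged(x)  yields 1, 0 */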

/*
** CAPI3REF: Define New Collating Sequences
** METHOD: sqlite3
**
** ^These functions add, remove, or modify a [collation] associated
** with the [database connection] specified as the first argument.
** strategy. A cost of N indicates that the cost of the strategy is similar
** to a linear scan of an SQLite table with N rows. A cost of log(N) 
** indicates that the expense of the operation is similar to that of a
** binary search on a unique indexed field of an SQLite table with N rows.
**
** ^The estimatedRows value is an estimate of the number of rows that
** will be returned by the strategy.
**
** IMPORTANT: The estimatedRows field was added to the sqlite3_index_info
** structure for SQLite version 3.8.2. If a virtual table extension is
** used with an SQLite version earlier than 3.8.2, the results of attempting 
** to read or write the estimatedRows field are undefined (but are likely 
** to include crashing the application). The estimatedRows field should
** therefore only be used if [sqlite3_libversion_number()] returns a
** value greater than or equal to 3008002.
*/
struct sqlite3_index_info {
  /* Inputs */
  int nConstraint;           /* Number of entries in aConstraint */
  struct sqlite3_index_constraint {
     int iColumn;              /* Column on left-hand side of constraint */
     unsigned char op;         /* Constraint operator */
** strategy. A cost of N indicates that the cost of the strategy is similar
** to a linear scan of an SQLite table with N rows. A cost of log(N) 
** indicates that the expense of the operation is similar to that of a
** binary search on a unique indexed field of an SQLite table with N rows.
**
** ^The estimatedRows value is an estimate of the number of rows that
** will be returned by the strategy.
**
** The xBestIndex method may optionally populate the idxFlags field with a 
** mask of SQLITE_INDEX_SCAN_* flags. Currently there is only one such flag -
** SQLITE_INDEX_SCAN_UNIQUE. If the xBestIndex method sets this flag, SQLite
** assumes that the strategy may visit at most one row. 
**
** Additionally, if xBestIndex sets the SQLITE_INDEX_SCAN_UNIQUE flag, then
** SQLite also assumes that if a call to the xUpdate() method is made as
** part of the same statement to delete or update a virtual table row and the
** implementation returns SQLITE_CONSTRAINT, then there is no need to rollback
** any database changes. In other words, if the xUpdate() returns
** SQLITE_CONSTRAINT, the database contents must be exactly as they were
** before xUpdate was called. By contrast, if SQLITE_INDEX_SCAN_UNIQUE is not
** set and xUpdate returns SQLITE_CONSTRAINT, any database changes made by
** the xUpdate method are automatically rolled back by SQLite.
**
** IMPORTANT: The estimatedRows field was added to the sqlite3_index_info
** structure for SQLite version 3.8.2. If a virtual table extension is
** used with an SQLite version earlier than 3.8.2, the results of attempting 
** to read or write the estimatedRows field are undefined (but are likely 
** to include crashing the application). The estimatedRows field should
** therefore only be used if [sqlite3_libversion_number()] returns a
** value greater than or equal to 3008002. Similarly, the idxFlags field
** was added for version 3.9.0. It may therefore only be used if
** sqlite3_libversion_number() returns a value greater than or equal to
** 3009000.
*/
struct sqlite3_index_info {
  /* Inputs */
  int nConstraint;           /* Number of entries in aConstraint */
  struct sqlite3_index_constraint {
     int iColumn;              /* Column on left-hand side of constraint */
     unsigned char op;         /* Constraint operator */
  int idxNum;                /* Number used to identify the index */
  char *idxStr;              /* String, possibly obtained from sqlite3_malloc */
  int needToFreeIdxStr;      /* Free idxStr using sqlite3_free() if true */
  int orderByConsumed;       /* True if output is already ordered */
  double estimatedCost;           /* Estimated cost of using this index */
  /* Fields below are only available in SQLite 3.8.2 and later */
  sqlite3_int64 estimatedRows;    /* Estimated number of rows returned */


};

/*
** CAPI3REF: Virtual Table Constraint Operator Codes
**
** These macros define the allowed values for the
** [sqlite3_index_info].aConstraint[].op field.  Each value represents
** an operator that is part of a constraint term in the WHERE clause of
** a query that uses a [virtual table].
  int idxNum;                /* Number used to identify the index */
  char *idxStr;              /* String, possibly obtained from sqlite3_malloc */
  int needToFreeIdxStr;      /* Free idxStr using sqlite3_free() if true */
  int orderByConsumed;       /* True if output is already ordered */
  double estimatedCost;           /* Estimated cost of using this index */
  /* Fields below are only available in SQLite 3.8.2 and later */
  sqlite3_int64 estimatedRows;    /* Estimated number of rows returned */
  /* Fields below are only available in SQLite 3.9.0 and later */
  int idxFlags;              /* Mask of SQLITE_INDEX_SCAN_* flags */
};

/*
** CAPI3REF: Virtual Table Scan Flags
*/
#define SQLITE_INDEX_SCAN_UNIQUE      1     /* Scan visits at most 1 row */
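To tie the new idxFlags field and SQLITE_INDEX_SCAN_UNIQUE together, here is a hedged sketch of an xBestIndex method for a hypothetical virtual table whose column 0 holds unique values; none of it comes from this check-in, and a real implementation would also guard the 3.8.2-only and 3.9.0-only fields behind the sqlite3_libversion_number() check recommended above.

static int vtabBestIndex(sqlite3_vtab *pTab, sqlite3_index_info *pInfo){
  int i;
  for(i=0; i<pInfo->nConstraint; i++){
    const struct sqlite3_index_constraint *p = &pInfo->aConstraint[i];
    if( p->usable && p->iColumn==0 && p->op==SQLITE_INDEX_CONSTRAINT_EQ ){
      pInfo->aConstraintUsage[i].argvIndex = 1;   /* pass value to xFilter */
      pInfo->aConstraintUsage[i].omit = 1;        /* constraint fully handled */
      pInfo->idxNum = 1;
      pInfo->estimatedCost = 1.0;
      pInfo->estimatedRows = 1;                   /* available since 3.8.2 */
      pInfo->idxFlags = SQLITE_INDEX_SCAN_UNIQUE; /* available since 3.9.0 */
      return SQLITE_OK;
    }
  }
  pInfo->idxNum = 0;                 /* fall back to a full scan */
  pInfo->estimatedCost = 1000000.0;
  return SQLITE_OK;
}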

/*
** CAPI3REF: Virtual Table Constraint Operator Codes
**
** These macros define the allowed values for the
** [sqlite3_index_info].aConstraint[].op field.  Each value represents
** an operator that is part of a constraint term in the WHERE clause of
** a query that uses a [virtual table].
** <li>  SQLITE_MUTEX_STATIC_OPEN
** <li>  SQLITE_MUTEX_STATIC_PRNG
** <li>  SQLITE_MUTEX_STATIC_LRU
** <li>  SQLITE_MUTEX_STATIC_PMEM
** <li>  SQLITE_MUTEX_STATIC_APP1
** <li>  SQLITE_MUTEX_STATIC_APP2
** <li>  SQLITE_MUTEX_STATIC_APP3
** </ul>
**
** ^The first two constants (SQLITE_MUTEX_FAST and SQLITE_MUTEX_RECURSIVE)
** cause sqlite3_mutex_alloc() to create
** a new mutex.  ^The new mutex is recursive when SQLITE_MUTEX_RECURSIVE
** is used but not necessarily so when SQLITE_MUTEX_FAST is used.
** The mutex implementation does not need to make a distinction
** <li>  SQLITE_MUTEX_STATIC_OPEN
** <li>  SQLITE_MUTEX_STATIC_PRNG
** <li>  SQLITE_MUTEX_STATIC_LRU
** <li>  SQLITE_MUTEX_STATIC_PMEM
** <li>  SQLITE_MUTEX_STATIC_APP1
** <li>  SQLITE_MUTEX_STATIC_APP2
** <li>  SQLITE_MUTEX_STATIC_APP3
** <li>  SQLITE_MUTEX_STATIC_VFS1
** <li>  SQLITE_MUTEX_STATIC_VFS2
** <li>  SQLITE_MUTEX_STATIC_VFS3
** </ul>
**
** ^The first two constants (SQLITE_MUTEX_FAST and SQLITE_MUTEX_RECURSIVE)
** cause sqlite3_mutex_alloc() to create
** a new mutex.  ^The new mutex is recursive when SQLITE_MUTEX_RECURSIVE
** is used but not necessarily so when SQLITE_MUTEX_FAST is used.
** The mutex implementation does not need to make a distinction
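As a side note on the mutex interfaces summarized above (a hedged sketch, not taken from this check-in): a dynamic mutex obtained with SQLITE_MUTEX_FAST or SQLITE_MUTEX_RECURSIVE must eventually be freed, whereas the static mutexes, including the new VFS1..VFS3 entries, must never be passed to sqlite3_mutex_free().

static void with_app_mutex(void (*xWork)(void)){
  sqlite3_mutex *p = sqlite3_mutex_alloc(SQLITE_MUTEX_RECURSIVE);
  if( p ) sqlite3_mutex_enter(p);   /* p may be NULL in single-thread builds */
  xWork();                          /* protected work */
  if( p ){
    sqlite3_mutex_leave(p);
    sqlite3_mutex_free(p);          /* dynamic mutexes only */
  }
}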


#ifdef __cplusplus
}  /* end of the 'extern "C"' block */
#endif

#endif  /* ifndef _SQLITE3RTREE_H_ */

/*
** 2014 May 31
**
** The author disclaims copyright to this source code.  In place of
** a legal notice, here is a blessing:
**
**    May you do good and not evil.
**    May you find forgiveness for yourself and forgive others.
**    May you share freely, never taking more than you give.
**
******************************************************************************
**
** Interfaces to extend FTS5. Using the interfaces defined in this file, 
** FTS5 may be extended with:
**
**     * custom tokenizers, and
**     * custom auxiliary functions.
*/


#ifndef _FTS5_H
#define _FTS5_H


#ifdef __cplusplus
extern "C" {
#endif

/*************************************************************************
** CUSTOM AUXILIARY FUNCTIONS
**
** Virtual table implementations may overload SQL functions by implementing
** the sqlite3_module.xFindFunction() method.
*/

typedef struct Fts5ExtensionApi Fts5ExtensionApi;
typedef struct Fts5Context Fts5Context;
typedef struct Fts5PhraseIter Fts5PhraseIter;

typedef void (*fts5_extension_function)(
  const Fts5ExtensionApi *pApi,   /* API offered by current FTS version */
  Fts5Context *pFts,              /* First arg to pass to pApi functions */
  sqlite3_context *pCtx,          /* Context for returning result/error */
  int nVal,                       /* Number of values in apVal[] array */
  sqlite3_value **apVal           /* Array of trailing arguments */
);

struct Fts5PhraseIter {
  const unsigned char *a;
  const unsigned char *b;
};

/*
** EXTENSION API FUNCTIONS
**
** xUserData(pFts):
**   Return a copy of the context pointer the extension function was 
**   registered with.
**
** xColumnTotalSize(pFts, iCol, pnToken):
**   If parameter iCol is less than zero, set output variable *pnToken
**   to the total number of tokens in the FTS5 table. Or, if iCol is
**   non-negative but less than the number of columns in the table, return
**   the total number of tokens in column iCol, considering all rows in 
**   the FTS5 table.
**
**   If parameter iCol is greater than or equal to the number of columns
**   in the table, SQLITE_RANGE is returned. Or, if an error occurs (e.g.
**   an OOM condition or IO error), an appropriate SQLite error code is 
**   returned.
**
** xColumnCount(pFts):
**   Return the number of columns in the table.
**
** xColumnSize(pFts, iCol, pnToken):
**   If parameter iCol is less than zero, set output variable *pnToken
**   to the total number of tokens in the current row. Or, if iCol is
**   non-negative but less than the number of columns in the table, set
**   *pnToken to the number of tokens in column iCol of the current row.
**
**   If parameter iCol is greater than or equal to the number of columns
**   in the table, SQLITE_RANGE is returned. Or, if an error occurs (e.g.
**   an OOM condition or IO error), an appropriate SQLite error code is 
**   returned.
**
** xColumnText:
**   This function attempts to retrieve the text of column iCol of the
**   current document. If successful, (*pz) is set to point to a buffer
**   containing the text in utf-8 encoding, (*pn) is set to the size in bytes
**   (not characters) of the buffer and SQLITE_OK is returned. Otherwise,
**   if an error occurs, an SQLite error code is returned and the final values
**   of (*pz) and (*pn) are undefined.
**
** xPhraseCount:
**   Returns the number of phrases in the current query expression.
**
** xPhraseSize:
**   Returns the number of tokens in phrase iPhrase of the query. Phrases
**   are numbered starting from zero.
**
** xInstCount:
**   Set *pnInst to the total number of occurrences of all phrases within
**   the query within the current row. Return SQLITE_OK if successful, or
**   an error code (i.e. SQLITE_NOMEM) if an error occurs.
**
** xInst:
**   Query for the details of phrase match iIdx within the current row.
**   Phrase matches are numbered starting from zero, so the iIdx argument
**   should be greater than or equal to zero and smaller than the value
**   output by xInstCount().
**
**   Returns SQLITE_OK if successful, or an error code (i.e. SQLITE_NOMEM) 
**   if an error occurs.
**
** xRowid:
**   Returns the rowid of the current row.
**
** xTokenize:
**   Tokenize text using the tokenizer belonging to the FTS5 table.
**
** xQueryPhrase(pFts5, iPhrase, pUserData, xCallback):
**   This API function is used to query the FTS table for phrase iPhrase
**   of the current query. Specifically, a query equivalent to:
**
**       ... FROM ftstable WHERE ftstable MATCH $p ORDER BY rowid
**
**   with $p set to a phrase equivalent to the phrase iPhrase of the
**   current query is executed. For each row visited, the callback function
**   passed as the fourth argument is invoked. The context and API objects 
**   passed to the callback function may be used to access the properties of
**   each matched row. Invoking Api.xUserData() returns a copy of the pointer
**   passed as the third argument to pUserData.
**
**   If the callback function returns any value other than SQLITE_OK, the
**   query is abandoned and the xQueryPhrase function returns immediately.
**   If the returned value is SQLITE_DONE, xQueryPhrase returns SQLITE_OK.
**   Otherwise, the error code is propagated upwards.
**
**   If the query runs to completion without incident, SQLITE_OK is returned.
**   Or, if some error occurs before the query completes or is aborted by
**   the callback, an SQLite error code is returned.
**
**
** xSetAuxdata(pFts5, pAux, xDelete)
**
**   Save the pointer passed as the second argument as the extension functions 
**   "auxiliary data". The pointer may then be retrieved by the current or any
**   future invocation of the same fts5 extension function made as part
**   of the same MATCH query using the xGetAuxdata() API.
**
**   Each extension function is allocated a single auxiliary data slot for
**   each FTS query (MATCH expression). If the extension function is invoked 
**   more than once for a single FTS query, then all invocations share a 
**   single auxiliary data context.
**
**   If there is already an auxiliary data pointer when this function is
**   invoked, then it is replaced by the new pointer. If an xDelete callback
**   was specified along with the original pointer, it is invoked at this
**   point.
**
**   The xDelete callback, if one is specified, is also invoked on the
**   auxiliary data pointer after the FTS5 query has finished.
**
**   If an error (e.g. an OOM condition) occurs within this function,
**   the auxiliary data is set to NULL and an error code is returned. If the
**   xDelete parameter was not NULL, it is invoked on the auxiliary data
**   pointer before returning.
**
**
** xGetAuxdata(pFts5, bClear)
**
**   Returns the current auxiliary data pointer for the fts5 extension 
**   function. See the xSetAuxdata() method for details.
**
**   If the bClear argument is non-zero, then the auxiliary data is cleared
**   (set to NULL) before this function returns. In this case the xDelete,
**   if any, is not invoked.
**
**
** xRowCount(pFts5, pnRow)
**
**   This function is used to retrieve the total number of rows in the table.
**   In other words, the same value that would be returned by:
**
**        SELECT count(*) FROM ftstable;
**
** xPhraseFirst()
**   This function is used, along with type Fts5PhraseIter and the xPhraseNext
**   method, to iterate through all instances of a single query phrase within
**   the current row. This is the same information as is accessible via the
**   xInstCount/xInst APIs. While the xInstCount/xInst APIs are more convenient
**   to use, this API may be faster under some circumstances. To iterate 
**   through instances of phrase iPhrase, use the following code:
**
**       Fts5PhraseIter iter;
**       int iCol, iOff;
**       for(pApi->xPhraseFirst(pFts, iPhrase, &iter, &iCol, &iOff);
**           iOff>=0;
**           pApi->xPhraseNext(pFts, &iter, &iCol, &iOff)
**       ){
**         // An instance of phrase iPhrase at offset iOff of column iCol
**       }
**
**   The Fts5PhraseIter structure is defined above. Applications should not
**   modify this structure directly - it should only be used as shown above
**   with the xPhraseFirst() and xPhraseNext() API methods.
**
** xPhraseNext()
**   See xPhraseFirst above.
*/
struct Fts5ExtensionApi {
  int iVersion;                   /* Currently always set to 1 */

  void *(*xUserData)(Fts5Context*);

  int (*xColumnCount)(Fts5Context*);
  int (*xRowCount)(Fts5Context*, sqlite3_int64 *pnRow);
  int (*xColumnTotalSize)(Fts5Context*, int iCol, sqlite3_int64 *pnToken);

  int (*xTokenize)(Fts5Context*, 
    const char *pText, int nText, /* Text to tokenize */
    void *pCtx,                   /* Context passed to xToken() */
    int (*xToken)(void*, int, const char*, int, int, int)       /* Callback */
  );

  int (*xPhraseCount)(Fts5Context*);
  int (*xPhraseSize)(Fts5Context*, int iPhrase);

  int (*xInstCount)(Fts5Context*, int *pnInst);
  int (*xInst)(Fts5Context*, int iIdx, int *piPhrase, int *piCol, int *piOff);

  sqlite3_int64 (*xRowid)(Fts5Context*);
  int (*xColumnText)(Fts5Context*, int iCol, const char **pz, int *pn);
  int (*xColumnSize)(Fts5Context*, int iCol, int *pnToken);

  int (*xQueryPhrase)(Fts5Context*, int iPhrase, void *pUserData,
    int(*)(const Fts5ExtensionApi*,Fts5Context*,void*)
  );
  int (*xSetAuxdata)(Fts5Context*, void *pAux, void(*xDelete)(void*));
  void *(*xGetAuxdata)(Fts5Context*, int bClear);

  void (*xPhraseFirst)(Fts5Context*, int iPhrase, Fts5PhraseIter*, int*, int*);
  void (*xPhraseNext)(Fts5Context*, Fts5PhraseIter*, int *piCol, int *piOff);
};

/* 
** CUSTOM AUXILIARY FUNCTIONS
*************************************************************************/
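For illustration only, a hedged sketch of a trivial auxiliary function follows (the name matchcount and its behaviour are invented, not part of this check-in): it reports how many phrase matches occur in the current row by way of the xInstCount method documented above.

static void matchcountFunc(
  const Fts5ExtensionApi *pApi,   /* API offered by current FTS version */
  Fts5Context *pFts,              /* First arg to pass to pApi functions */
  sqlite3_context *pCtx,          /* Context for returning result/error */
  int nVal,                       /* Number of values in apVal[] array */
  sqlite3_value **apVal           /* Array of trailing arguments */
){
  int nInst = 0;
  int rc = pApi->xInstCount(pFts, &nInst);
  if( rc==SQLITE_OK ){
    sqlite3_result_int(pCtx, nInst);
  }else{
    sqlite3_result_error_code(pCtx, rc);
  }
}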

/*************************************************************************
** CUSTOM TOKENIZERS
**
** Applications may also register custom tokenizer types. A tokenizer 
** is registered by providing fts5 with a populated instance of the 
** following structure. All structure methods must be defined, setting
** any member of the fts5_tokenizer struct to NULL leads to undefined
** behaviour. The structure methods are expected to function as follows:
**
** xCreate:
**   This function is used to allocate and initialize a tokenizer instance.
**   A tokenizer instance is required to actually tokenize text.
**
**   The first argument passed to this function is a copy of the (void*)
**   pointer provided by the application when the fts5_tokenizer object
**   was registered with FTS5 (the third argument to xCreateTokenizer()). 
**   The second and third arguments are an array of nul-terminated strings
**   containing the tokenizer arguments, if any, specified following the
**   tokenizer name as part of the CREATE VIRTUAL TABLE statement used
**   to create the FTS5 table.
**
**   The final argument is an output variable. If successful, (*ppOut) 
**   should be set to point to the new tokenizer handle and SQLITE_OK
**   returned. If an error occurs, some value other than SQLITE_OK should
**   be returned. In this case, fts5 assumes that the final value of *ppOut 
**   is undefined.
**
** xDelete:
**   This function is invoked to delete a tokenizer handle previously
**   allocated using xCreate(). Fts5 guarantees that this function will
**   be invoked exactly once for each successful call to xCreate().
**
** xTokenize:
**   This function is expected to tokenize the nText byte string indicated 
**   by argument pText. pText may or may not be nul-terminated. The first
**   argument passed to this function is a pointer to an Fts5Tokenizer object
**   returned by an earlier call to xCreate().
**
**   The second argument indicates the reason that FTS5 is requesting
**   tokenization of the supplied text. This is always one of the following
**   four values:
**
**   <ul><li> <b>FTS5_TOKENIZE_DOCUMENT</b> - A document is being inserted into
**            or removed from the FTS table. The tokenizer is being invoked to
**            determine the set of tokens to add to (or delete from) the
**            FTS index.
**
**       <li> <b>FTS5_TOKENIZE_QUERY</b> - A MATCH query is being executed 
**            against the FTS index. The tokenizer is being called to tokenize 
**            a bareword or quoted string specified as part of the query.
**
**       <li> <b>(FTS5_TOKENIZE_QUERY | FTS5_TOKENIZE_PREFIX)</b> - Same as
**            FTS5_TOKENIZE_QUERY, except that the bareword or quoted string is
**            followed by a "*" character, indicating that the last token
**            returned by the tokenizer will be treated as a token prefix.
**
**       <li> <b>FTS5_TOKENIZE_AUX</b> - The tokenizer is being invoked to 
**            satisfy an fts5_api.xTokenize() request made by an auxiliary
**            function. Or an fts5_api.xColumnSize() request made by the same
**            on a columnsize=0 database.  
**   </ul>
**
**   For each token in the input string, the supplied callback xToken() must
**   be invoked. The first argument to it should be a copy of the pointer
**   passed as the second argument to xTokenize(). The third and fourth
**   arguments are a pointer to a buffer containing the token text, and the
**   size of the token in bytes. The 4th and 5th arguments are the byte offsets
**   of the first byte of and first byte immediately following the text from
**   which the token is derived within the input.
**
**   The second argument passed to the xToken() callback ("tflags") should
**   normally be set to 0. The exception is if the tokenizer supports 
**   synonyms. In this case see the discussion below for details.
**
**   FTS5 assumes the xToken() callback is invoked for each token in the 
**   order that they occur within the input text.
**
**   If an xToken() callback returns any value other than SQLITE_OK, then
**   the tokenization should be abandoned and the xTokenize() method should
**   immediately return a copy of the xToken() return value. Or, if the
**   input buffer is exhausted, xTokenize() should return SQLITE_OK. Finally,
**   if an error occurs with the xTokenize() implementation itself, it
**   may abandon the tokenization and return any error code other than
**   SQLITE_OK or SQLITE_DONE.
**
** SYNONYM SUPPORT
**
**   Custom tokenizers may also support synonyms. Consider a case in which a
**   user wishes to query for a phrase such as "first place". Using the 
**   built-in tokenizers, the FTS5 query 'first + place' will match instances
**   of "first place" within the document set, but not alternative forms
**   such as "1st place". In some applications, it would be better to match
**   all instances of "first place" or "1st place" regardless of which form
**   the user specified in the MATCH query text.
**
**   There are several ways to approach this in FTS5:
**
**   <ol><li> By mapping all synonyms to a single token. In the above
**            example, this means that the tokenizer returns the
**            same token for inputs "first" and "1st". Say that token is in
**            fact "first", so that when the user inserts the document "I won
**            1st place" entries are added to the index for tokens "i", "won",
**            "first" and "place". If the user then queries for '1st + place',
**            the tokenizer substitutes "first" for "1st" and the query works
**            as expected.
**
**       <li> By adding multiple synonyms for a single term to the FTS index.
**            In this case, when tokenizing query text, the tokenizer may 
**            provide multiple synonyms for a single term within the document.
**            FTS5 then queries the index for each synonym individually. For
**            example, faced with the query:
**
**   <codeblock>
**     ... MATCH 'first place'</codeblock>
**
**            the tokenizer offers both "1st" and "first" as synonyms for the
**            first token in the MATCH query and FTS5 effectively runs a query 
**            similar to:
**
**   <codeblock>
**     ... MATCH '(first OR 1st) place'</codeblock>
**
**            except that, for the purposes of auxiliary functions, the query
**            still appears to contain just two phrases - "(first OR 1st)" 
**            being treated as a single phrase.
**
**       <li> By adding multiple synonyms for a single term to the FTS index.
**            Using this method, when tokenizing document text, the tokenizer
**            provides multiple synonyms for each token. So that when a 
**            document such as "I won first place" is tokenized, entries are
**            added to the FTS index for "i", "won", "first", "1st" and
**            "place".
**
**            This way, even if the tokenizer does not provide synonyms
**            when tokenizing query text (it should not - to do so would
**            be inefficient), it doesn't matter if the user queries for
**            'first + place' or '1st + place', as there are entries in the
**            FTS index corresponding to both forms of the first token.
**   </ol>
**
**   Whether it is parsing document or query text, any call to xToken that
**   specifies a <i>tflags</i> argument with the FTS5_TOKEN_COLOCATED bit
**   is considered to supply a synonym for the previous token. For example,
**   when parsing the document "I won first place", a tokenizer that supports
**   synonyms would call xToken() 5 times, as follows:
**
**   <codeblock>
**       xToken(pCtx, 0, "i",                      1,  0,  1);
**       xToken(pCtx, 0, "won",                    3,  2,  5);
**       xToken(pCtx, 0, "first",                  5,  6, 11);
**       xToken(pCtx, FTS5_TOKEN_COLOCATED, "1st", 3,  6, 11);
**       xToken(pCtx, 0, "place",                  5, 12, 17);
**</codeblock>
**
**   It is an error to specify the FTS5_TOKEN_COLOCATED flag the first time
**   xToken() is called. Multiple synonyms may be specified for a single token
**   by making multiple calls to xToken(FTS5_TOKEN_COLOCATED) in sequence. 
**   There is no limit to the number of synonyms that may be provided for a
**   single token.
**
**   In many cases, method (1) above is the best approach. It does not add 
**   extra data to the FTS index or require FTS5 to query for multiple terms,
**   so it is efficient in terms of disk space and query speed. However, it
**   does not support prefix queries very well. If, as suggested above, the
**   token "first" is substituted for "1st" by the tokenizer, then the query:
**
**   <codeblock>
**     ... MATCH '1s*'</codeblock>
**
**   will not match documents that contain the token "1st" (as the tokenizer
**   will probably not map "1s" to any prefix of "first").
**
**   For full prefix support, method (3) may be preferred. In this case, 
**   because the index contains entries for both "first" and "1st", prefix
**   queries such as 'fi*' or '1s*' will match correctly. However, because
**   extra entries are added to the FTS index, this method uses more space
**   within the database.
**
**   Method (2) offers a midpoint between (1) and (3). Using this method,
**   a query such as '1s*' will match documents that contain the literal 
**   token "1st", but not "first" (assuming the tokenizer is not able to
**   provide synonyms for prefixes). However, a non-prefix query like '1st'
**   will match against "1st" and "first". This method does not require
**   extra disk space, as no extra entries are added to the FTS index. 
**   On the other hand, it may require more CPU cycles to run MATCH queries,
**   as separate queries of the FTS index are required for each synonym.
**
**   When using methods (2) or (3), it is important that the tokenizer only
**   provide synonyms when tokenizing document text (method (2)) or query
**   text (method (3)), not both. Doing so will not cause any errors, but is
**   inefficient.
*/
typedef struct Fts5Tokenizer Fts5Tokenizer;
typedef struct fts5_tokenizer fts5_tokenizer;
struct fts5_tokenizer {
  int (*xCreate)(void*, const char **azArg, int nArg, Fts5Tokenizer **ppOut);
  void (*xDelete)(Fts5Tokenizer*);
  int (*xTokenize)(Fts5Tokenizer*, 
      void *pCtx,
      int flags,            /* Mask of FTS5_TOKENIZE_* flags */
      const char *pText, int nText, 
      int (*xToken)(
        void *pCtx,         /* Copy of 2nd argument to xTokenize() */
        int tflags,         /* Mask of FTS5_TOKEN_* flags */
        const char *pToken, /* Pointer to buffer containing token */
        int nToken,         /* Size of token in bytes */
        int iStart,         /* Byte offset of token within input text */
        int iEnd            /* Byte offset of end of token within input text */
      )
  );
};

/* Flags that may be passed as the third argument to xTokenize() */
#define FTS5_TOKENIZE_QUERY     0x0001
#define FTS5_TOKENIZE_PREFIX    0x0002
#define FTS5_TOKENIZE_DOCUMENT  0x0004
#define FTS5_TOKENIZE_AUX       0x0008

/* Flags that may be passed by the tokenizer implementation back to FTS5
** as the third argument to the supplied xToken callback. */
#define FTS5_TOKEN_COLOCATED    0x0001      /* Same position as prev. token */

/*
** END OF CUSTOM TOKENIZERS
*************************************************************************/
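Again purely as a hedged sketch (not part of this check-in), the xTokenize method of a naive space-separated tokenizer shows how each token is handed back through the xToken callback together with its byte offsets; xCreate and xDelete are omitted, and a production tokenizer would also honour UTF-8 boundaries and the FTS5_TOKENIZE_* flags.

static int spaceTokenize(
  Fts5Tokenizer *pTok,            /* Tokenizer handle from xCreate() */
  void *pCtx,                     /* Opaque pointer to pass to xToken() */
  int flags,                      /* Mask of FTS5_TOKENIZE_* flags */
  const char *pText, int nText,   /* Text to tokenize */
  int (*xToken)(void*, int, const char*, int, int, int)
){
  int i = 0;
  while( i<nText ){
    int iStart;
    while( i<nText && pText[i]==' ' ) i++;    /* skip separators */
    iStart = i;
    while( i<nText && pText[i]!=' ' ) i++;    /* consume one token */
    if( i>iStart ){
      int rc = xToken(pCtx, 0, &pText[iStart], i-iStart, iStart, i);
      if( rc!=SQLITE_OK ) return rc;
    }
  }
  return SQLITE_OK;
}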

/*************************************************************************
** FTS5 EXTENSION REGISTRATION API
*/
typedef struct fts5_api fts5_api;
struct fts5_api {
  int iVersion;                   /* Currently always set to 2 */

  /* Create a new tokenizer */
  int (*xCreateTokenizer)(
    fts5_api *pApi,
    const char *zName,
    void *pContext,
    fts5_tokenizer *pTokenizer,
    void (*xDestroy)(void*)
  );

  /* Find an existing tokenizer */
  int (*xFindTokenizer)(
    fts5_api *pApi,
    const char *zName,
    void **ppContext,
    fts5_tokenizer *pTokenizer
  );

  /* Create a new auxiliary function */
  int (*xCreateFunction)(
    fts5_api *pApi,
    const char *zName,
    void *pContext,
    fts5_extension_function xFunction,
    void (*xDestroy)(void*)
  );
};

/*
** END OF REGISTRATION API
*************************************************************************/
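And a hedged sketch of registration through the API above, assuming an fts5_api pointer has already been obtained from the database connection and reusing the hypothetical matchcountFunc from the earlier sketch:

static int register_matchcount(fts5_api *pApi){
  /* arguments: api, zName, pContext, xFunction, xDestroy */
  return pApi->xCreateFunction(pApi, "matchcount", 0, matchcountFunc, 0);
}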

#ifdef __cplusplus
}  /* end of the 'extern "C"' block */
#endif

#endif /* _FTS5_H */


Changes to src/stash.c.
**
**  fossil stash list ?-v|--verbose?
**  fossil stash ls ?-v|--verbose?
**
**     List all change sets currently stashed.  Show information about
**     individual files in each changeset if -v or --verbose is used.
**
**  fossil stash show ?STASHID? ?DIFF-FLAGS?
**
**     Show the content of a stash
**
**  fossil stash pop
**  fossil stash apply ?STASHID?
**
**     Apply STASHID or the most recently created stash to the current
**
**  fossil stash list ?-v|--verbose?
**  fossil stash ls ?-v|--verbose?
**
**     List all change sets currently stashed.  Show information about
**     individual files in each changeset if -v or --verbose is used.
**
**  fossil stash show|cat ?STASHID? ?DIFF-FLAGS?
**
**     Show the content of a stash
**
**  fossil stash pop
**  fossil stash apply ?STASHID?
**
**     Apply STASHID or the most recently created stash to the current
**     directory would be if STASHID were applied.
**
** SUMMARY:
**  fossil stash
**  fossil stash save ?-m|--comment COMMENT? ?FILES...?
**  fossil stash snapshot ?-m|--comment COMMENT? ?FILES...?
**  fossil stash list|ls  ?-v|--verbose? ?-W|--width <num>?
**  fossil stash show ?STASHID? ?DIFF-OPTIONS?
**  fossil stash pop
**  fossil stash apply ?STASHID?
**  fossil stash goto ?STASHID?
**  fossil stash rm|drop ?STASHID? ?-a|--all?
**  fossil stash [g]diff ?STASHID? ?DIFF-OPTIONS?
*/
void stash_cmd(void){
**     directory would be if STASHID were applied.
**
** SUMMARY:
**  fossil stash
**  fossil stash save ?-m|--comment COMMENT? ?FILES...?
**  fossil stash snapshot ?-m|--comment COMMENT? ?FILES...?
**  fossil stash list|ls  ?-v|--verbose? ?-W|--width <num>?
**  fossil stash show|cat ?STASHID? ?DIFF-OPTIONS?
**  fossil stash pop
**  fossil stash apply ?STASHID?
**  fossil stash goto ?STASHID?
**  fossil stash rm|drop ?STASHID? ?-a|--all?
**  fossil stash [g]diff ?STASHID? ?DIFF-OPTIONS?
*/
void stash_cmd(void){
      Stmt q;
      db_prepare(&q,"SELECT origname FROM stashfile WHERE stashid=%d", stashid);
      while( db_step(&q)==SQLITE_ROW ){
        newArgv[i++] = mprintf("%s%s", g.zLocalRoot, db_column_text(&q, 0));
      }
      db_finalize(&q);
      newArgv[0] = g.argv[0];

      g.argv = newArgv;
      g.argc = nFile+2;
      if( nFile==0 ) return;
    }
    g.argv[1] = "revert";
    revert_cmd();
  }else
      Stmt q;
      db_prepare(&q,"SELECT origname FROM stashfile WHERE stashid=%d", stashid);
      while( db_step(&q)==SQLITE_ROW ){
        newArgv[i++] = mprintf("%s%s", g.zLocalRoot, db_column_text(&q, 0));
      }
      db_finalize(&q);
      newArgv[0] = g.argv[0];
      newArgv[1] = 0;
      g.argv = newArgv;
      g.argc = nFile+2;
      if( nFile==0 ) return;
    }
    g.argv[1] = "revert";
    revert_cmd();
  }else
                  "(SELECT origname FROM stashfile WHERE stashid=%d)",
                  stashid);
    undo_finish();
  }else
  if( memcmp(zCmd, "diff", nCmd)==0
   || memcmp(zCmd, "gdiff", nCmd)==0
   || memcmp(zCmd, "show", nCmd)==0

  ){
    const char *zDiffCmd = 0;
    const char *zBinGlob = 0;
    int fIncludeBinary = 0;
    u64 diffFlags;

    if( find_option("tk",0,0)!=0 ){
      db_close(0);
      diff_tk((zCmd[0]=='s' ? "stash show" : "stash diff"), 3);
      return;
    }
    if( find_option("internal","i",0)==0 ){
      zDiffCmd = diff_command_external(memcmp(zCmd, "gdiff", nCmd)==0);
    }
    diffFlags = diff_options();
    if( find_option("verbose","v",0)!=0 ) diffFlags |= DIFF_VERBOSE;
                  "(SELECT origname FROM stashfile WHERE stashid=%d)",
                  stashid);
    undo_finish();
  }else
  if( memcmp(zCmd, "diff", nCmd)==0
   || memcmp(zCmd, "gdiff", nCmd)==0
   || memcmp(zCmd, "show", nCmd)==0
   || memcmp(zCmd, "cat", nCmd)==0
  ){
    const char *zDiffCmd = 0;
    const char *zBinGlob = 0;
    int fIncludeBinary = 0;
    u64 diffFlags;

    if( find_option("tk",0,0)!=0 ){
      db_close(0);
        switch (zCmd[0]) {
        case 's':
        case 'c':
          diff_tk("stash show", 3);
          break;

        default:
          diff_tk("stash diff", 3);
        }
      return;
    }
    if( find_option("internal","i",0)==0 ){
      zDiffCmd = diff_command_external(memcmp(zCmd, "gdiff", nCmd)==0);
    }
    diffFlags = diff_options();
    if( find_option("verbose","v",0)!=0 ) diffFlags |= DIFF_VERBOSE;
Changes to src/statrep.c.
  int nEventTotal = 0;               /* Total event count */
  int rowClass = 0;                  /* counter for alternating
                                        row colors */
  int nMaxEvents = 1;                /* max number of events for
                                        all rows. */
  Blob userFilter = empty_blob;      /* Optional user=johndoe query string */
  static const char *const daysOfWeek[] = {
  "Monday", "Tuesday", "Wednesday", "Thursday",
  "Friday", "Saturday", "Sunday"
  };

  stats_report_init_view();
  if( zUserName ){
    blob_appendf(&userFilter, "user=%s", zUserName);
  }
  db_prepare(&query,
               "SELECT cast(mtime %% 7 AS INTEGER) dow,"
               "       COUNT(*) AS eventCount"
               "  FROM v_reports"
               " WHERE ifnull(coalesce(euser,user,'')=%Q,1)"
               " GROUP BY dow ORDER BY dow", zUserName);
  @ <h1>Timeline Events (%h(stats_report_label_for_type())) by Day of the Week
  if( zUserName ){
    @ for user %h(zUserName)
  }
  @ </h1>
  db_multi_exec(
    "CREATE TEMP TABLE piechart(amt,label);"
    "INSERT INTO piechart"
    " SELECT count(*), cast(mtime %% 7 AS INT) FROM v_reports"
     " WHERE ifnull(coalesce(euser,user,'')=%Q,1)"
     " GROUP BY 2 ORDER BY 2;"
    "UPDATE piechart SET label = CASE label"
    "  WHEN 0 THEN 'Monday'"
    "  WHEN 1 THEN 'Tuesday'"
    "  WHEN 2 THEN 'Wednesday'"
    "  WHEN 3 THEN 'Thursday'"
    "  WHEN 4 THEN 'Friday'"
    "  WHEN 5 THEN 'Saturday'"

    "  ELSE 'Sunday' END;", zUserName
  );
  if( db_int(0, "SELECT count(*) FROM piechart")>=2 ){
    @ <center><svg width=700 height=400>
    piechart_render(700, 400, PIE_OTHER|PIE_PERCENT);
    @ </svg></centre><hr/>
  }
  @ <table class='statistics-report-table-events' border='0'
  int nEventTotal = 0;               /* Total event count */
  int rowClass = 0;                  /* counter for alternating
                                        row colors */
  int nMaxEvents = 1;                /* max number of events for
                                        all rows. */
  Blob userFilter = empty_blob;      /* Optional user=johndoe query string */
  static const char *const daysOfWeek[] = {
  "Sunday", "Monday", "Tuesday", "Wednesday",
  "Thursday", "Friday", "Saturday"
  };

  stats_report_init_view();
  if( zUserName ){
    blob_appendf(&userFilter, "user=%s", zUserName);
  }
  db_prepare(&query,
               "SELECT cast(strftime('%%w', mtime) AS INTEGER) dow,"
               "       COUNT(*) AS eventCount"
               "  FROM v_reports"
               " WHERE ifnull(coalesce(euser,user,'')=%Q,1)"
               " GROUP BY dow ORDER BY dow", zUserName);
  @ <h1>Timeline Events (%h(stats_report_label_for_type())) by Day of the Week
  if( zUserName ){
    @ for user %h(zUserName)
  }
  @ </h1>
  db_multi_exec(
    "CREATE TEMP TABLE piechart(amt,label);"
    "INSERT INTO piechart"
    " SELECT count(*), cast(strftime('%%w', mtime) AS INT) FROM v_reports"
     " WHERE ifnull(coalesce(euser,user,'')=%Q,1)"
     " GROUP BY 2 ORDER BY 2;"
    "UPDATE piechart SET label = CASE label"
    "  WHEN 0 THEN 'Sunday'"
    "  WHEN 1 THEN 'Monday'"
    "  WHEN 2 THEN 'Tuesday'"
    "  WHEN 3 THEN 'Wednesday'"
    "  WHEN 4 THEN 'Thursday'"
    "  WHEN 5 THEN 'Friday'"
    "  WHEN 6 THEN 'Saturday'"
    "  ELSE 'ERROR' END;", zUserName
  );
  if( db_int(0, "SELECT count(*) FROM piechart")>=2 ){
    @ <center><svg width=700 height=400>
    piechart_render(700, 400, PIE_OTHER|PIE_PERCENT);
    @ </svg></centre><hr/>
  }
  @ <table class='statistics-report-table-events' border='0'
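The corrected queries above rely on SQLite's strftime('%w', ...) convention of 0 = Sunday through 6 = Saturday, which is also why the daysOfWeek[] array now begins with "Sunday". A hedged, standalone check of that mapping (not part of Fossil):

#include <stdio.h>
#include "sqlite3.h"

int main(void){
  sqlite3 *db;
  sqlite3_stmt *pStmt;
  sqlite3_open(":memory:", &db);
  sqlite3_prepare_v2(db,
      "SELECT strftime('%w','2015-11-01'),"    /* a Sunday   -> '0' */
      "       strftime('%w','2015-11-07')",    /* a Saturday -> '6' */
      -1, &pStmt, 0);
  if( sqlite3_step(pStmt)==SQLITE_ROW ){
    printf("%s %s\n", (const char*)sqlite3_column_text(pStmt, 0),
                      (const char*)sqlite3_column_text(pStmt, 1));
  }
  sqlite3_finalize(pStmt);
  sqlite3_close(db);
  return 0;
}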
Changes to src/style.c.
*/
static void url_var(
  const char *zVarPrefix,
  const char *zConfigName,
  const char *zPageName
){
  char *zVarName = mprintf("%s_url", zVarPrefix);
  char *zUrl = mprintf("%s/%s?id=%x", g.zTop, zPageName,
                       skin_id(zConfigName));
  Th_Store(zVarName, zUrl);
  free(zUrl);
  free(zVarName);
}

/*
*/
static void url_var(
  const char *zVarPrefix,
  const char *zConfigName,
  const char *zPageName
){
  char *zVarName = mprintf("%s_url", zVarPrefix);
  char *zUrl = mprintf("%R/%s?id=%x", zPageName,
                       skin_id(zConfigName));
  Th_Store(zVarName, zUrl);
  free(zUrl);
  free(zVarName);
}

/*
Changes to src/th_lang.c.
** TH Syntax:
**
**   string is CLASS STRING
*/
static int string_is_command(
  Th_Interp *interp, void *ctx, int argc, const char **argv, int *argl
){
  int i;
  int iRes = 1;
  if( argc!=4 ){
    return Th_WrongNumArgs(interp, "string is class string");
  }
  if( argl[2]!=5 || 0!=memcmp(argv[2], "alnum", 5) ){
    Th_ErrorMessage(interp, "Expected alnum, got: ", argv[2], argl[2]);
    return TH_ERROR;


  }

  for(i=0; i<argl[3]; i++){
    if( !th_isalnum(argv[3][i]) ){
      iRes = 0;
    }
  }

  return Th_SetResultInt(interp, iRes);
}

/*
** TH Syntax:
**
**   string last NEEDLE HAYSTACK
*/
** TH Syntax:
**
**   string is CLASS STRING
*/
static int string_is_command(
  Th_Interp *interp, void *ctx, int argc, const char **argv, int *argl
){


  if( argc!=4 ){
    return Th_WrongNumArgs(interp, "string is class string");
  }
  if( argl[2]==5 && 0==memcmp(argv[2], "alnum", 5) ){


    int i;
    int iRes = 1;


    for(i=0; i<argl[3]; i++){
      if( !th_isalnum(argv[3][i]) ){
        iRes = 0;
      }
    }

    return Th_SetResultInt(interp, iRes);
  }else if( argl[2]==6 && 0==memcmp(argv[2], "double", 6) ){
    double fVal;
    if( Th_ToDouble(interp, argv[3], argl[3], &fVal)==TH_OK ){
      return Th_SetResultInt(interp, 1);
    }
    return Th_SetResultInt(interp, 0);
  }else if( argl[2]==7 && 0==memcmp(argv[2], "integer", 7) ){
    int iVal;
    if( Th_ToInt(interp, argv[3], argl[3], &iVal)==TH_OK ){
      return Th_SetResultInt(interp, 1);
    }
    return Th_SetResultInt(interp, 0);
  }else if( argl[2]==4 && 0==memcmp(argv[2], "list", 4) ){
    if( Th_SplitList(interp, argv[3], argl[3], 0, 0, 0)==TH_OK ){
      return Th_SetResultInt(interp, 1);
    }
    return Th_SetResultInt(interp, 0);
  }else{
    Th_ErrorMessage(interp,
        "Expected alnum, double, integer, or list, got:", argv[2], argl[2]);
    return TH_ERROR;
  }
}

/*
** TH Syntax:
**
**   string last NEEDLE HAYSTACK
*/
Changes to src/th_main.c.
void Th_PrintTraceLog(){
  if( g.thTrace ){
    fossil_print("\n------------------ BEGIN TRACE LOG ------------------\n");
    fossil_print("%s", blob_str(&g.thLog));
    fossil_print("\n------------------- END TRACE LOG -------------------\n");
  }
}

/*
** TH1 command: httpize STRING
**
** Escape all characters of STRING which have special meaning in URI
** components. Return a new string result.
*/
void Th_PrintTraceLog(){
  if( g.thTrace ){
    fossil_print("\n------------------ BEGIN TRACE LOG ------------------\n");
    fossil_print("%s", blob_str(&g.thLog));
    fossil_print("\n------------------- END TRACE LOG -------------------\n");
  }
}

/*
** - adapted from ls_cmd_rev in checkin.c
** - adapted command/error handling for use within TH1
** - interface adapted to allow result creation as a TH1 list
**
** Takes a check-in identifier in zRev and an optional glob pattern in zGlob
** as parameters, and returns in pzList/pnList a TH1 list of the filenames
** in that check-in which match the glob pattern
*/
static void dir_cmd_rev(
  Th_Interp *interp,
  char **pzList,
  int *pnList,
  const char *zRev,  /* Revision string given */
  const char *zGlob, /* Glob pattern given */
  int bDetails
){
  Stmt q;
  char *zOrderBy = "pathname COLLATE nocase";
  int rid;

  rid = th1_name_to_typed_rid(interp, zRev, "ci");
  compute_fileage(rid, zGlob);
  db_prepare(&q,
    "SELECT datetime(fileage.mtime, 'localtime'), fileage.pathname,\n"
    "       blob.size\n"
    "  FROM fileage, blob\n"
    " WHERE blob.rid=fileage.fid \n"
    " ORDER BY %s;", zOrderBy /*safe-for-%s*/
  );
  while( db_step(&q)==SQLITE_ROW ){
    const char *zFile = db_column_text(&q, 1);
    if( bDetails ){
      const char *zTime = db_column_text(&q, 0);
      int size = db_column_int(&q, 2);
      char zSize[50];
      char *zSubList = 0;
      int nSubList = 0;
      sqlite3_snprintf(sizeof(zSize), zSize, "%d", size);
      Th_ListAppend(interp, &zSubList, &nSubList, zFile, -1);
      Th_ListAppend(interp, &zSubList, &nSubList, zSize, -1);
      Th_ListAppend(interp, &zSubList, &nSubList, zTime, -1);
      Th_ListAppend(interp, pzList, pnList, zSubList, -1);
      Th_Free(interp, zSubList);
    }else{
      Th_ListAppend(interp, pzList, pnList, zFile, -1);
    }
  }
  db_finalize(&q);
}

/*
** TH1 command: dir CHECKIN ?GLOB? ?DETAILS?
**
** Returns a list containing all files in CHECKIN. If GLOB is given only
** the files matching the pattern GLOB within CHECKIN will be returned.
** If DETAILS is non-zero, the result will be a list-of-lists, with each
** element containing at least three elements: the file name, the file
** size (in bytes), and the file last modification time (relative to the
** time zone configured for the repository).
*/
static int dirCmd(
  Th_Interp *interp,
  void *ctx,
  int argc,
  const char **argv,
  int *argl
){
  const char *zGlob = 0;
  int bDetails = 0;

  if( argc<2 || argc>4 ){
    return Th_WrongNumArgs(interp, "dir CHECKIN ?GLOB? ?DETAILS?");
  }
  if( argc>=3 ){
    zGlob = argv[2];
  }
  if( argc>=4 && Th_ToInt(interp, argv[3], argl[3], &bDetails) ){
    return TH_ERROR;
  }
  if( Th_IsRepositoryOpen() ){
    char *zList = 0;
    int nList = 0;
    dir_cmd_rev(interp, &zList, &nList, argv[1], zGlob, bDetails);
    Th_SetResult(interp, zList, nList);
    Th_Free(interp, zList);
    return TH_OK;
  }else{
    Th_SetResult(interp, "repository unavailable", -1);
    return TH_ERROR;
  }
}

/*
** TH1 command: httpize STRING
**
** Escape all characters of STRING which have special meaning in URI
** components. Return a new string result.
*/
){
  if( argc!=2 ){
    return Th_WrongNumArgs(interp, "puts STRING");
  }
  sendText((char*)argv[1], argl[1], *(unsigned int*)pConvert);
  return TH_OK;
}

/*
** TH1 command: decorate STRING
** TH1 command: wiki STRING
**
** Render the input string as wiki.  For the decorate command, only links
** are handled.
){
  if( argc!=2 ){
    return Th_WrongNumArgs(interp, "puts STRING");
  }
  sendText((char*)argv[1], argl[1], *(unsigned int*)pConvert);
  return TH_OK;
}

/*
** TH1 command: markdown STRING
**
** Renders the input string as markdown.  The result is a two-element list.
** The first element is the text-only title string.  The second element
** contains the body, rendered as HTML.
*/
static int markdownCmd(
  Th_Interp *interp,
  void *p,
  int argc,
  const char **argv,
  int *argl
){
  Blob src, title, body;
  char *zValue = 0;
  int nValue = 0;
  if( argc!=2 ){
    return Th_WrongNumArgs(interp, "markdown STRING");
  }
  blob_zero(&src);
  blob_init(&src, (char*)argv[1], argl[1]);
  blob_zero(&title); blob_zero(&body);
  markdown_to_html(&src, &title, &body);
  Th_ListAppend(interp, &zValue, &nValue, blob_str(&title), blob_size(&title));
  Th_ListAppend(interp, &zValue, &nValue, blob_str(&body), blob_size(&body));
  Th_SetResult(interp, zValue, nValue);
  return TH_OK;
}

/*
** TH1 command: decorate STRING
** TH1 command: wiki STRING
**
** Render the input string as wiki.  For the decorate command, only links
** are handled.
    return Th_WrongNumArgs(interp, "htmlize STRING");
  }
  zOut = htmlize((char*)argv[1], argl[1]);
  Th_SetResult(interp, zOut, -1);
  free(zOut);
  return TH_OK;
}

/*
** TH1 command: date
**
** Return a string which is the current time and date.  If the
** -local option is used, the date appears using localtime instead
** of UTC.
    return Th_WrongNumArgs(interp, "htmlize STRING");
  }
  zOut = htmlize((char*)argv[1], argl[1]);
  Th_SetResult(interp, zOut, -1);
  free(zOut);
  return TH_OK;
}

/*
** TH1 command: encode64 STRING
**
** Encode the specified string using Base64 and return the result.
*/
static int encode64Cmd(
  Th_Interp *interp,
  void *p,
  int argc,
  const char **argv,
  int *argl
){
  char *zOut;
  if( argc!=2 ){
    return Th_WrongNumArgs(interp, "encode64 STRING");
  }
  zOut = encode64((char*)argv[1], argl[1]);
  Th_SetResult(interp, zOut, -1);
  free(zOut);
  return TH_OK;
}

/*
** TH1 command: date
**
** Return a string which is the current time and date.  If the
** -local option is used, the date appears using localtime instead
** of UTC.
** TH1 command: hasfeature STRING
**
** Return true if the fossil binary has the given compile-time feature
** enabled. The set of features includes:
**
** "ssl"             = FOSSIL_ENABLE_SSL
** "legacyMvRm"      = FOSSIL_ENABLE_LEGACY_MV_RM

** "th1Docs"         = FOSSIL_ENABLE_TH1_DOCS
** "th1Hooks"        = FOSSIL_ENABLE_TH1_HOOKS
** "tcl"             = FOSSIL_ENABLE_TCL
** "useTclStubs"     = USE_TCL_STUBS
** "tclStubs"        = FOSSIL_ENABLE_TCL_STUBS
** "tclPrivateStubs" = FOSSIL_ENABLE_TCL_PRIVATE_STUBS
** "json"            = FOSSIL_ENABLE_JSON
** "markdown"        = FOSSIL_ENABLE_MARKDOWN
** "unicodeCmdLine"  = !BROKEN_MINGW_CMDLINE
** "dynamicBuild"    = FOSSIL_DYNAMIC_BUILD
**


*/
static int hasfeatureCmd(
  Th_Interp *interp,
  void *p,
  int argc,
  const char **argv,
  int *argl







>











>
>







651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
** TH1 command: hasfeature STRING
**
** Return true if the fossil binary has the given compile-time feature
** enabled. The set of features includes:
**
** "ssl"             = FOSSIL_ENABLE_SSL
** "legacyMvRm"      = FOSSIL_ENABLE_LEGACY_MV_RM
** "execRelPaths"    = FOSSIL_ENABLE_EXEC_REL_PATHS
** "th1Docs"         = FOSSIL_ENABLE_TH1_DOCS
** "th1Hooks"        = FOSSIL_ENABLE_TH1_HOOKS
** "tcl"             = FOSSIL_ENABLE_TCL
** "useTclStubs"     = USE_TCL_STUBS
** "tclStubs"        = FOSSIL_ENABLE_TCL_STUBS
** "tclPrivateStubs" = FOSSIL_ENABLE_TCL_PRIVATE_STUBS
** "json"            = FOSSIL_ENABLE_JSON
** "markdown"        = FOSSIL_ENABLE_MARKDOWN
** "unicodeCmdLine"  = !BROKEN_MINGW_CMDLINE
** "dynamicBuild"    = FOSSIL_DYNAMIC_BUILD
**
** Specifying an unknown feature will return a value of false, it will not
** raise a script error.
*/
static int hasfeatureCmd(
  Th_Interp *interp,
  void *p,
  int argc,
  const char **argv,
  int *argl
    rc = 1;
  }
#endif
#if defined(FOSSIL_ENABLE_LEGACY_MV_RM)
  else if( 0 == fossil_strnicmp( zArg, "legacyMvRm\0", 11 ) ){
    rc = 1;
  }
#endif
#if defined(FOSSIL_ENABLE_EXEC_REL_PATHS)
  else if( 0 == fossil_strnicmp( zArg, "execRelPaths\0", 13 ) ){
    rc = 1;
  }
#endif
#if defined(FOSSIL_ENABLE_TH1_DOCS)
  else if( 0 == fossil_strnicmp( zArg, "th1Docs\0", 8 ) ){
    rc = 1;
  }
#endif
#if defined(FOSSIL_ENABLE_TH1_HOOKS)
    {"anoncap",       hascapCmd,            (void*)&anonFlag},
    {"anycap",        anycapCmd,            0},
    {"artifact",      artifactCmd,          0},
    {"checkout",      checkoutCmd,          0},
    {"combobox",      comboboxCmd,          0},
    {"date",          dateCmd,              0},
    {"decorate",      wikiCmd,              (void*)&aFlags[2]},
    {"dir",           dirCmd,               0},
    {"enable_output", enableOutputCmd,      0},
    {"encode64",      encode64Cmd,          0},
    {"getParameter",  getParameterCmd,      0},
    {"glob_match",    globMatchCmd,         0},
    {"globalState",   globalStateCmd,       0},
    {"httpize",       httpizeCmd,           0},
    {"hascap",        hascapCmd,            (void*)&zeroInt},
    {"hasfeature",    hasfeatureCmd,        0},
    {"html",          putsCmd,              (void*)&aFlags[0]},
    {"htmlize",       htmlizeCmd,           0},
    {"http",          httpCmd,              0},
    {"linecount",     linecntCmd,           0},
    {"markdown",      markdownCmd,          0},
    {"puts",          putsCmd,              (void*)&aFlags[1]},
    {"query",         queryCmd,             0},
    {"randhex",       randhexCmd,           0},
    {"regexp",        regexpCmd,            0},
    {"reinitialize",  reinitializeCmd,      0},
    {"render",        renderCmd,            0},
    {"repository",    repositoryCmd,        0},
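
Each row of this dispatch table is a {name, handler, context} triple; the new dir, encode64 and markdown entries follow the existing pattern (the dir handler itself is outside this excerpt).  The feature names that hasfeature reports map directly onto compile-time switches, so build-dependent C code can branch on the same defines.  A small illustration, not part of the check-in, using the FOSSIL_ENABLE_EXEC_REL_PATHS symbol introduced above:

static void report_exec_rel_paths(void){
#if defined(FOSSIL_ENABLE_EXEC_REL_PATHS)
  fossil_print("execRelPaths: enabled\n");
#else
  fossil_print("execRelPaths: disabled\n");
#endif
}
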
Changes to src/th_tcl.c.

#ifdef FOSSIL_ENABLE_TCL

#include "sqlite3.h"
#include "th.h"
#include "tcl.h"

/*
** This macro is used to verify that the header version of Tcl meets some
** minimum requirement.
*/
#define MINIMUM_TCL_VERSION(major, minor) \
  ((TCL_MAJOR_VERSION > (major)) || \
   ((TCL_MAJOR_VERSION == (major)) && (TCL_MINOR_VERSION >= (minor))))

/*
** These macros are designed to reduce the redundant code required to marshal
** arguments from TH1 to Tcl.
*/
#define USE_ARGV_TO_OBJV() \
  int objc;                \
  Tcl_Obj **objv;          \

/*
** Is the loaded version of Tcl one where TIP #285 (asynchronous script
** cancellation) is available?  This should return non-zero only for Tcl
** 8.6 and higher.
*/
static int canUseTip285(){
#if MINIMUM_TCL_VERSION(8, 6)
  int major = -1, minor = -1, patchLevel = -1, type = -1;

  Tcl_GetVersion(&major, &minor, &patchLevel, &type);
  if( major<0 || minor<0 || patchLevel<0 || type<0 ){
    return 0; /* NOTE: Invalid version info, assume bad. */
  }
  return (major>8 || (major==8 && minor>=6));
#else
  return 0;
#endif
}
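
The header macro and the runtime probe are complementary: MINIMUM_TCL_VERSION() keeps TIP #285 calls out of builds against pre-8.6 headers, while canUseTip285() checks the version of the interpreter actually loaded.  A sketch of the combined guard, assuming a live Tcl_Interp pointer (the wrapper name is illustrative):

static int scriptWasCancelled(Tcl_Interp *tclInterp, int useTip285){
#if MINIMUM_TCL_VERSION(8, 6)
  /* Tcl_Canceled() returns something other than TCL_OK once the script
  ** running in tclInterp has been cancelled. */
  return useTip285 && Tcl_Canceled(tclInterp, 0)!=TCL_OK;
#else
  return 0;  /* headers predate TIP #285; never report cancellation */
#endif
}
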

/*
** Creates and initializes a Tcl interpreter for use with the specified TH1
** interpreter.  Stores the created Tcl interpreter in the Tcl context supplied
** by the caller.  This must be declared here because quite a few functions in
** this file need to use it before it can be defined.
  }
  if( !bWait ) flags |= TCL_DONT_WAIT;
  Tcl_Preserve((ClientData)tclInterp);
  while( Tcl_DoOneEvent(flags) ){
    if( Tcl_InterpDeleted(tclInterp) ){
      break;
    }
#if MINIMUM_TCL_VERSION(8, 6)
    if( useTip285 && Tcl_Canceled(tclInterp, 0)!=TCL_OK ){
      break;
    }
#endif
  }
  Tcl_Release((ClientData)tclInterp);
  return rc;
}

/*
** Creates and initializes a Tcl interpreter for use with the specified TH1
Changes to src/timeline.c.
**    to=UUID          ... to this
**    shortest         ... show only the shortest path
**    uf=FUUID       Show only check-ins that use given file version
**    brbg           Background color from branch name
**    ubg            Background color from user
**    namechng       Show only check-ins that filename changes
**    forks          Show only forks and their children
**    ym=YYYY-MM     Shown only events for the given year/month.

**    datefmt=N      Override the date format
**
** p= and d= can appear individually or together.  If either p= or d=
** appear, then u=, y=, a=, and b= are ignored.
**
** If both a= and b= appear then both upper and lower bounds are honored.
**
**    to=UUID          ... to this
**    shortest         ... show only the shortest path
**    uf=FUUID       Show only check-ins that use given file version
**    brbg           Background color from branch name
**    ubg            Background color from user
**    namechng       Show only check-ins that filename changes
**    forks          Show only forks and their children
**    ym=YYYY-MM     Show only events for the given year/month.
**    ymd=YYYY-MM-DD Show only events on the given day
**    datefmt=N      Override the date format
**
** p= and d= can appear individually or together.  If either p= or d=
** appear, then u=, y=, a=, and b= are ignored.
**
** If both a= and b= appear then both upper and lower bounds are honored.
**
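
The new ymd parameter threads through the same machinery as ym and yw.  Condensing the two hunks that follow into one sketch (not a literal excerpt; sql is the query Blob the page is assembling): read the value with P(), then narrow the event query to that calendar day.

  const char *zDay = P("ymd");        /* e.g. "2015-11-03" */
  if( zDay ){
    blob_append_sql(&sql, " AND %Q=strftime('%%Y-%%m-%%d',event.mtime) ",
                 zDay);
  }

For ymd=2015-11-03 the appended fragment, after %Q quoting and %% expansion, reads: AND '2015-11-03'=strftime('%Y-%m-%d',event.mtime).
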
  const char *zMark = P("m");        /* Mark this event or an event this time */
  const char *zTagName = P("t");     /* Show events with this tag */
  const char *zBrName = P("r");      /* Show events related to this tag */
  const char *zSearch = P("s");      /* Search string */
  const char *zUses = P("uf");       /* Only show check-ins hold this file */
  const char *zYearMonth = P("ym");  /* Show check-ins for the given YYYY-MM */
  const char *zYearWeek = P("yw");   /* Check-ins for YYYY-WW (week-of-year) */
  const char *zDay = P("ymd");       /* Check-ins for the day YYYY-MM-DD */
  int useDividers = P("nd")==0;      /* Show dividers if "nd" is missing */
  int renameOnly = P("namechng")!=0; /* Show only check-ins that rename files */
  int forkOnly = PB("forks");        /* Show only forks and their children */
  int tagid;                         /* Tag ID */
  int tmFlags = 0;                   /* Timeline flags */
  const char *zThisTag = 0;          /* Suppress links to this tag */
  const char *zThisUser = 0;         /* Suppress links to this user */
      blob_append_sql(&sql, " AND %Q=strftime('%%Y-%%m',event.mtime) ",
                   zYearMonth);
    }
    else if( zYearWeek ){
      blob_append_sql(&sql, " AND %Q=strftime('%%Y-%%W',event.mtime) ",
                   zYearWeek);
    }
    else if( zDay ){
      blob_append_sql(&sql, " AND %Q=strftime('%%Y-%%m-%%d',event.mtime) ",
                   zDay);
    }
    if( tagid ){
      blob_append_sql(&sql,
        " AND (EXISTS(SELECT 1 FROM tagxref"
            " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid)\n", tagid);

      if( zBrName ){
        /* The next two blob_appendf() calls add SQL that causes check-ins that
    db_multi_exec("%s", blob_sql_text(&sql));

    n = db_int(0, "SELECT count(*) FROM timeline WHERE etype!='div' /*scan*/");
    if( zYearMonth ){
      blob_appendf(&desc, "%s events for %h", zEType, zYearMonth);
    }else if( zYearWeek ){
      blob_appendf(&desc, "%s events for year/week %h", zEType, zYearWeek);
    }else if( zDay ){
      blob_appendf(&desc, "%s events occurring on %h", zEType, zDay);
    }else if( zBefore==0 && zCirca==0 && n>=nEntry && nEntry>0 ){
      blob_appendf(&desc, "%d most recent %ss", n, zEType);
    }else{
      blob_appendf(&desc, "%d %ss", n, zEType);
    }
    if( zUses ){
      char *zFilenames = names_of_file(zUses);
        db_prepare(&fchngQuery,
           "SELECT (pid<=0) AS isnew,"
           "       (fid==0) AS isdel,"
           "       (SELECT name FROM filename WHERE fnid=mlink.fnid) AS name,"
           "       (SELECT uuid FROM blob WHERE rid=fid),"
           "       (SELECT uuid FROM blob WHERE rid=pid)"
           "  FROM mlink"
           " WHERE mid=:mid AND pid!=fid"
           " ORDER BY 3 /*sort*/"
        );
        fchngQueryInit = 1;
      }
      db_bind_int(&fchngQuery, ":mid", rid);
      while( db_step(&fchngQuery)==SQLITE_ROW ){
        const char *zFilename = db_column_text(&fchngQuery, 2);
        db_prepare(&fchngQuery,
           "SELECT (pid<=0) AS isnew,"
           "       (fid==0) AS isdel,"
           "       (SELECT name FROM filename WHERE fnid=mlink.fnid) AS name,"
           "       (SELECT uuid FROM blob WHERE rid=fid),"
           "       (SELECT uuid FROM blob WHERE rid=pid)"
           "  FROM mlink"
           " WHERE mid=:mid AND pid!=fid AND NOT mlink.isaux"
           " ORDER BY 3 /*sort*/"
        );
        fchngQueryInit = 1;
      }
      db_bind_int(&fchngQuery, ":mid", rid);
      while( db_step(&fchngQuery)==SQLITE_ROW ){
        const char *zFilename = db_column_text(&fchngQuery, 2);
         db_column_text(&q, 3));
    }
  }
  db_finalize(&q);
}

/*
** WEBPAGE: test_timewarps
**
** Show all check-ins that are "timewarps".  A timewarp is a
** check-in that occurs before its parent, according to the
** timestamp information on the check-in.  This can only actually
** happen, of course, if a user's system clock is set incorrectly.
*/
void test_timewarp_page(void){
  Stmt q;

  login_check_credentials();
  if( !g.perm.Read || !g.perm.Hyperlink ){
    login_needed(g.anon.Read && g.anon.Hyperlink);
    return;
  }
  style_header("Instances of timewarp");
  @ <ul>
  db_prepare(&q,
     "SELECT blob.uuid "
     "  FROM plink p, plink c, blob"
     " WHERE p.cid=c.pid  AND p.mtime>c.mtime"
     "   AND blob.rid=c.cid"
  );
  while( db_step(&q)==SQLITE_ROW ){
    const char *zUuid = db_column_text(&q, 0);
    @ <li>
    @ <a href="%R/timeline?dp=%!S(zUuid)&amp;unhide">%S(zUuid)</a>
  }
  db_finalize(&q);
  style_footer();
}
         db_column_text(&q, 3));
    }
  }
  db_finalize(&q);
}

/*
** WEBPAGE: timewarps
**
** Show all check-ins that are "timewarps".  A timewarp is a
** check-in that occurs before its parent, according to the
** timestamp information on the check-in.  This can only actually
** happen, of course, if a user's system clock is set incorrectly.
*/
void test_timewarp_page(void){
  Stmt q;
  int cnt = 0;

  login_check_credentials();
  if( !g.perm.Read || !g.perm.Hyperlink ){
    login_needed(g.anon.Read && g.anon.Hyperlink);
    return;
  }
  style_header("Instances of timewarp");

  db_prepare(&q,
     "SELECT blob.uuid, "
     "       date(ce.mtime),"
     "       pe.mtime>ce.mtime,"
     "       coalesce(ce.euser,ce.user)"
     "  FROM plink p, plink c, blob, event pe, event ce"
     " WHERE p.cid=c.pid  AND p.mtime>c.mtime"
     "   AND blob.rid=c.cid"
     "   AND pe.objid=p.cid"
     "   AND ce.objid=c.cid"
     " ORDER BY 2 DESC"
  );
  while( db_step(&q)==SQLITE_ROW ){
    const char *zCkin = db_column_text(&q, 0);
    const char *zDate = db_column_text(&q, 1);
    const char *zStatus = db_column_int(&q,2) ? "Open"
                                 : "Resolved by editing date";
    const char *zUser = db_column_text(&q, 3);
    char *zHref = href("%R/timeline?c=%S", zCkin);
    if( cnt==0 ){
      @ <div class="brlist"><table id="timewarptable">
      @ <thead><tr>
      @ <th>Check-in</th>
      @ <th>Date</th>
      @ <th>User</th>
      @ <th>Status</th>
      @ </tr></thead><tbody>
    }
    @ <tr>

    @ <td>%s(zHref)%S(zCkin)</a></td>
    @ <td>%s(zHref)%s(zDate)</a></td>
    @ <td>%h(zUser)</td>
    @ <td>%s(zStatus)</td>
    @ </tr>
    fossil_free(zHref);
    cnt++;
  }
  db_finalize(&q);
  if( cnt==0 ){
    @ <p>No timewarps in this repository</p>
  }else{
    @ </tbody></table></div>
    output_table_sorting_javascript("timewarptable","tttt",2);
  }
  style_footer();
}
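
The rewritten page keeps the same definition of a timewarp, a plink edge whose parent check-in carries a later timestamp than its child, and adds date, user and status columns in a sortable table.  A count-only variant of the detection query, sketched with db_int() as already used earlier in timeline.c (the plink schema is assumed unchanged):

  int nWarp = db_int(0,
     "SELECT count(*) FROM plink p, plink c"
     " WHERE p.cid=c.pid AND p.mtime>c.mtime");
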
Changes to src/tktsetup.c.
@ CREATE INDEX ticketchng_idx1 ON ticketchng(tkt_id, tkt_mtime);
;

/*
** Return the ticket table definition
*/
const char *ticket_table_schema(void){
  return db_get("ticket-table", (char*)zDefaultTicketTable);
}

/*
** Common implementation for the ticket setup editor pages.
*/
static void tktsetup_generic(
  const char *zTitle,           /* Page title */
@ CREATE INDEX ticketchng_idx1 ON ticketchng(tkt_id, tkt_mtime);
;

/*
** Return the ticket table definition
*/
const char *ticket_table_schema(void){
  return db_get("ticket-table", zDefaultTicketTable);
}

/*
** Common implementation for the ticket setup editor pages.
*/
static void tktsetup_generic(
  const char *zTitle,           /* Page title */
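
This hunk and the remaining tktsetup.c hunks (plus the xfersetup.c hunk further down) all make the same edit: the (char*) cast in front of the default value passed to db_get() disappears.  That suggests db_get() now takes its fallback argument as const char*; a prototype consistent with these call sites, inferred rather than quoted from the check-in (the return type is not visible in this excerpt), would be:

  char *db_get(const char *zName, const char *zDefault);
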
  }
  if( PB("setup") ){
    cgi_redirect("tktsetup");
  }
  isSubmit = P("submit")!=0;
  z = P("x");
  if( z==0 ){
    z = db_get(zDbField, (char*)zDfltValue);
  }
  style_header("Edit %s", zTitle);
  if( P("clear")!=0 ){
    login_verify_csrf_secret();
    db_unset(zDbField, 0);
    if( xRebuild ) xRebuild();
    cgi_redirect("tktsetup");
  }
  if( PB("setup") ){
    cgi_redirect("tktsetup");
  }
  isSubmit = P("submit")!=0;
  z = P("x");
  if( z==0 ){
    z = db_get(zDbField, zDfltValue);
  }
  style_header("Edit %s", zTitle);
  if( P("clear")!=0 ){
    login_verify_csrf_secret();
    db_unset(zDbField, 0);
    if( xRebuild ) xRebuild();
    cgi_redirect("tktsetup");
@ }
;

/*
** Return the ticket common code.
*/
const char *ticket_common_code(void){
  return db_get("ticket-common", (char*)zDefaultTicketCommon);
}

/*
** WEBPAGE: tktsetup_com
** Administrative page used to define TH1 script that is
** common to all ticket screens.
*/
@ }
;

/*
** Return the ticket common code.
*/
const char *ticket_common_code(void){
  return db_get("ticket-common", zDefaultTicketCommon);
}

/*
** WEBPAGE: tktsetup_com
** Administrative page used to define TH1 script that is
** common to all ticket screens.
*/
@ return
;

/*
** Return the ticket change code.
*/
const char *ticket_change_code(void){
  return db_get("ticket-change", (char*)zDefaultTicketChange);
}

/*
** WEBPAGE: tktsetup_change
** Administrative screen used to view or edit the TH1 script
** that shows ticket changes.
*/
@ return
;

/*
** Return the ticket change code.
*/
const char *ticket_change_code(void){
  return db_get("ticket-change", zDefaultTicketChange);
}

/*
** WEBPAGE: tktsetup_change
** Administrative screen used to view or edit the TH1 script
** that shows ticket changes.
*/
@ </table>
;

/*
** Return the code used to generate the new ticket page
*/
const char *ticket_newpage_code(void){
  return db_get("ticket-newpage", (char*)zDefaultNew);
}

/*
** WEBPAGE: tktsetup_newpage
** Administrative page used to view or edit the TH1 script used
** to enter new tickets.
*/
@ </table>
;

/*
** Return the code used to generate the new ticket page
*/
const char *ticket_newpage_code(void){
  return db_get("ticket-newpage", zDefaultNew);
}

/*
** WEBPAGE: tktsetup_newpage
** Administrative page used to view or edit the TH1 script used
** to enter new tickets.
*/
;


/*
** Return the code used to generate the view ticket page
*/
const char *ticket_viewpage_code(void){
  return db_get("ticket-viewpage", (char*)zDefaultView);
}

/*
** WEBPAGE: tktsetup_viewpage
** Administrative page used to view or edit the TH1 script that
** displays individual tickets.
*/
;


/*
** Return the code used to generate the view ticket page
*/
const char *ticket_viewpage_code(void){
  return db_get("ticket-viewpage", zDefaultView);
}

/*
** WEBPAGE: tktsetup_viewpage
** Administrative page used to view or edit the TH1 script that
** displays individual tickets.
*/
@ </table>
;

/*
** Return the code used to generate the edit ticket page
*/
const char *ticket_editpage_code(void){
  return db_get("ticket-editpage", (char*)zDefaultEdit);
}

/*
** WEBPAGE: tktsetup_editpage
** Administrative page for viewing or editing the TH1 script that
** drives the ticket editing page.
*/
@ </table>
;

/*
** Return the code used to generate the edit ticket page
*/
const char *ticket_editpage_code(void){
  return db_get("ticket-editpage", zDefaultEdit);
}

/*
** WEBPAGE: tktsetup_editpage
** Administrative page for viewing or editing the TH1 script that
** drives the ticket editing page.
*/
@ </th1>
;

/*
** Return the code used to generate the report list
*/
const char *ticket_reportlist_code(void){
  return db_get("ticket-reportlist", (char*)zDefaultReportList);
}

/*
** WEBPAGE: tktsetup_reportlist
** Administrative page used to view or edit the TH1 script that
** defines the "report list" page.
*/
@ </th1>
;

/*
** Return the code used to generate the report list
*/
const char *ticket_reportlist_code(void){
  return db_get("ticket-reportlist", zDefaultReportList);
}

/*
** WEBPAGE: tktsetup_reportlist
** Administrative page used to view or edit the TH1 script that
** defines the "report list" page.
*/
;


/*
** Return the template ticket report format:
*/
const char *ticket_key_template(void){
  return db_get("ticket-key-template", (char*)zDefaultKey);
}

/*
** WEBPAGE: tktsetup_keytplt
**
** Administrative page used to view or edit the Key template
** for tickets.
;


/*
** Return the template ticket report format:
*/
const char *ticket_key_template(void){
  return db_get("ticket-key-template", zDefaultKey);
}

/*
** WEBPAGE: tktsetup_keytplt
**
** Administrative page used to view or edit the Key template
** for tickets.
Changes to src/undo.c.

#if INTERFACE
/*
** Possible return values from the undo_maybe_save() routine.
*/
#define UNDO_NONE     (0) /* Placeholder only used to initialize vars. */
#define UNDO_SAVED_OK (1) /* The specified file was saved successfully. */

#define UNDO_INACTIVE (2) /* File not saved, subsystem is not active. */
#define UNDO_TOOBIG   (3) /* File not saved, it exceeded a size limit. */
#endif

/*
** Undo the change to the file zPathname.  zPathname is the pathname
** of the file relative to the root of the repository.  If redoFlag is
** true then redo a change.  If there is nothing to undo (or redo) then
** this routine is a noop.

#if INTERFACE
/*
** Possible return values from the undo_maybe_save() routine.
*/
#define UNDO_NONE     (0) /* Placeholder only used to initialize vars. */
#define UNDO_SAVED_OK (1) /* The specified file was saved successfully. */
#define UNDO_DISABLED (2) /* File not saved, subsystem is disabled. */
#define UNDO_INACTIVE (3) /* File not saved, subsystem is not active. */
#define UNDO_TOOBIG   (4) /* File not saved, it exceeded a size limit. */
#endif

/*
** Undo the change to the file zPathname.  zPathname is the pathname
** of the file relative to the root of the repository.  If redoFlag is
** true then redo a change.  If there is nothing to undo (or redo) then
** this routine is a noop.
    old_exists = db_column_int(&q, 1);
    old_exe = db_column_int(&q, 2);
    if( old_exists ){
      db_ephemeral_blob(&q, 0, &new);
    }
    if( old_exists ){
      if( new_exists ){
        fossil_print("%s %s\n", redoFlag ? "REDO" : "UNDO", zPathname);
      }else{
        fossil_print("NEW %s\n", zPathname);
      }
      if( new_exists && (new_link || old_link) ){
        file_delete(zFullname);
      }
      if( old_link ){
        symlink_create(blob_str(&new), zFullname);
      }else{
    old_exists = db_column_int(&q, 1);
    old_exe = db_column_int(&q, 2);
    if( old_exists ){
      db_ephemeral_blob(&q, 0, &new);
    }
    if( old_exists ){
      if( new_exists ){
        fossil_print("%s   %s\n", redoFlag ? "REDO" : "UNDO", zPathname);
      }else{
        fossil_print("NEW    %s\n", zPathname);
      }
      if( new_exists && (new_link || old_link) ){
        file_delete(zFullname);
      }
      if( old_link ){
        symlink_create(blob_str(&new), zFullname);
      }else{

/*
** Save the current content of the file zPathname so that it
** will be undoable.  The name is relative to the root of the
** tree.
*/
void undo_save(const char *zPathname){
  if( undoDisable ) return;
  if( undo_maybe_save(zPathname, -1)!=UNDO_SAVED_OK ){
    fossil_panic("failed to save undo information for path: %s",
                 zPathname);
  }
}
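
undo_disable(), referenced by the new UNDO_DISABLED description below but defined outside this excerpt, now short-circuits undo_save() as well, so a command that must not be undoable can simply flip that switch first.  A two-line sketch, with zSomeFile standing in for any managed path:

  undo_disable();        /* turn the undo subsystem off for this operation */
  undo_save(zSomeFile);  /* returns immediately instead of saving or panicking */
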

/*
**          value less than zero, call the undo_save()
**          function instead.
**
** The return value of this function will always be one of the
** following codes:
**
** UNDO_SAVED_OK: The specified file was saved successfully.
**
** UNDO_DISABLED: The specified file was NOT saved, because the
**                "undo subsystem" is disabled.  This error may
**                indicate that a call to undo_disable() was
**                issued.
**
** UNDO_INACTIVE: The specified file was NOT saved, because the
**                "undo subsystem" is not active.  This error
**                may indicate that a call to undo_begin() is
**                missing.
**
**   UNDO_TOOBIG: The specified file was NOT saved, because it
**                exceeded the specified size limit.  It is
**                impossible for this value to be returned if
**                the specified size limit is less than zero
**                (i.e. unlimited).
*/
int undo_maybe_save(const char *zPathname, i64 limit){
  char *zFullname;
  i64 size;
  int result;

  if( undoDisable ) return UNDO_DISABLED;
  if( !undoActive ) return UNDO_INACTIVE;
  zFullname = mprintf("%s%s", g.zLocalRoot, zPathname);
  size = file_wd_size(zFullname);
  if( limit<0 || size<=limit ){
    int existsFlag = (size>=0);
    int isLink = file_wd_islink(zFullname);
    Stmt q;
*/
const char *undo_save_message(int rc){
  static char zRc[32];

  switch( rc ){
    case UNDO_NONE:     return "undo is disabled for this operation";
    case UNDO_SAVED_OK: return "the save operation was successful";
    case UNDO_DISABLED: return "the undo subsystem is disabled";
    case UNDO_INACTIVE: return "the undo subsystem is inactive";
    case UNDO_TOOBIG:   return "the file is too big";
    default: {
      sqlite3_snprintf(sizeof(zRc), zRc, "of error code %d", rc);
    }
  }
  return zRc;
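
With UNDO_DISABLED in place a caller can tell "undo turned off on purpose" apart from "undo never started" and from the size cap.  A sketch of the intended calling pattern, assuming zPathname names a file under the check-out root (the 10MiB cap is an arbitrary example):

  int rc = undo_maybe_save(zPathname, 10*1024*1024);
  if( rc!=UNDO_SAVED_OK ){
    fossil_print("not saved for undo because %s\n", undo_save_message(rc));
  }
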
Changes to src/update.c.
      /* File added in the target. */
      if( file_wd_isfile_or_link(zFullPath) ){
        fossil_print("ADD %s - overwrites an unmanaged file\n", zName);
        nOverwrite++;
      }else{
        fossil_print("ADD %s\n", zName);
      }
      undo_save(zName);
      if( !dryRunFlag ) vfile_to_disk(0, idt, 0, 0);
    }else if( idt>0 && idv>0 && ridt!=ridv && (chnged==0 || deleted) ){
      /* The file is unedited.  Change it to the target version */
      undo_save(zName);
      if( deleted ){
        fossil_print("UPDATE %s - change to unmanaged file\n", zName);
      }else{
        fossil_print("UPDATE %s\n", zName);
      }

      if( !dryRunFlag ) vfile_to_disk(0, idt, 0, 0);
    }else if( idt>0 && idv>0 && !deleted && file_wd_size(zFullPath)<0 ){
      /* The file is missing from the local check-out. Restore it to the
      ** version that appears in the target. */
      fossil_print("UPDATE %s\n", zName);
      undo_save(zName);
      if( !dryRunFlag ) vfile_to_disk(0, idt, 0, 0);
    }else if( idt==0 && idv>0 ){
      if( ridv==0 ){
        /* Added in current checkout.  Continue to hold the file as
        ** an addition */
        db_multi_exec("UPDATE vfile SET vid=%d WHERE id=%d", tid, idv);
      }else if( chnged ){
        /* Edited locally but deleted from the target.  Do not track the
        ** file but keep the edited version around. */
        fossil_print("CONFLICT %s - edited locally but deleted by update\n",
                     zName);
        nConflict++;
      }else{
        fossil_print("REMOVE %s\n", zName);
        undo_save(zName);
        if( !dryRunFlag ) file_delete(zFullPath);
      }
    }else if( idt>0 && idv>0 && ridt!=ridv && chnged ){
      /* Merge the changes in the current tree into the target version */
      Blob r, t, v;
      int rc;
      if( nameChng ){
        fossil_print("MERGE %s -> %s\n", zName, zNewName);
      }else{
        fossil_print("MERGE %s\n", zName);
      }
      if( islinkv || islinkt /* || file_wd_islink(zFullPath) */ ){
        fossil_print("***** Cannot merge symlink %s\n", zNewName);
        nConflict++;
      }else{
        unsigned mergeFlags = dryRunFlag ? MERGE_DRYRUN : 0;
        undo_save(zName);
        content_get(ridt, &t);
        content_get(ridv, &v);
        rc = merge_3way(&v, zFullPath, &t, &r, mergeFlags);
        if( rc>=0 ){
          if( !dryRunFlag ){
            blob_write_to_file(&r, zFullNewPath);
            file_wd_setexe(zFullNewPath, isexe);
      /* File added in the target. */
      if( file_wd_isfile_or_link(zFullPath) ){
        fossil_print("ADD %s - overwrites an unmanaged file\n", zName);
        nOverwrite++;
      }else{
        fossil_print("ADD %s\n", zName);
      }
      if( !dryRunFlag && !internalUpdate ) undo_save(zName);
      if( !dryRunFlag ) vfile_to_disk(0, idt, 0, 0);
    }else if( idt>0 && idv>0 && ridt!=ridv && (chnged==0 || deleted) ){
      /* The file is unedited.  Change it to the target version */

      if( deleted ){
        fossil_print("UPDATE %s - change to unmanaged file\n", zName);
      }else{
        fossil_print("UPDATE %s\n", zName);
      }
      if( !dryRunFlag && !internalUpdate ) undo_save(zName);
      if( !dryRunFlag ) vfile_to_disk(0, idt, 0, 0);
    }else if( idt>0 && idv>0 && !deleted && file_wd_size(zFullPath)<0 ){
      /* The file is missing from the local check-out. Restore it to the
      ** version that appears in the target. */
      fossil_print("UPDATE %s\n", zName);
      if( !dryRunFlag && !internalUpdate ) undo_save(zName);
      if( !dryRunFlag ) vfile_to_disk(0, idt, 0, 0);
    }else if( idt==0 && idv>0 ){
      if( ridv==0 ){
        /* Added in current checkout.  Continue to hold the file as
        ** an addition */
        db_multi_exec("UPDATE vfile SET vid=%d WHERE id=%d", tid, idv);
      }else if( chnged ){
        /* Edited locally but deleted from the target.  Do not track the
        ** file but keep the edited version around. */
        fossil_print("CONFLICT %s - edited locally but deleted by update\n",
                     zName);
        nConflict++;
      }else{
        fossil_print("REMOVE %s\n", zName);
        if( !dryRunFlag && !internalUpdate ) undo_save(zName);
        if( !dryRunFlag ) file_delete(zFullPath);
      }
    }else if( idt>0 && idv>0 && ridt!=ridv && chnged ){
      /* Merge the changes in the current tree into the target version */
      Blob r, t, v;
      int rc;
      if( nameChng ){
        fossil_print("MERGE %s -> %s\n", zName, zNewName);
      }else{
        fossil_print("MERGE %s\n", zName);
      }
      if( islinkv || islinkt /* || file_wd_islink(zFullPath) */ ){
        fossil_print("***** Cannot merge symlink %s\n", zNewName);
        nConflict++;
      }else{
        unsigned mergeFlags = dryRunFlag ? MERGE_DRYRUN : 0;
        if( !dryRunFlag && !internalUpdate ) undo_save(zName);
        content_get(ridt, &t);
        content_get(ridv, &v);
        rc = merge_3way(&v, zFullPath, &t, &r, mergeFlags);
        if( rc>=0 ){
          if( !dryRunFlag ){
            blob_write_to_file(&r, zFullNewPath);
            file_wd_setexe(zFullNewPath, isexe);
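
Every undo_save() call in this function now sits behind the same two-flag guard, so dry runs and internal updates never record undo state.  A hypothetical helper expressing that guard once (purely illustrative, not part of the check-in):

static void update_undo_save(const char *zName, int dryRunFlag, int internalUpdate){
  if( !dryRunFlag && !internalUpdate ) undo_save(zName);
}
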
  undo_begin();
  db_multi_exec("CREATE TEMP TABLE torevert(name UNIQUE);");

  if( g.argc>2 ){
    for(i=2; i<g.argc; i++){
      Blob fname;
      zFile = mprintf("%/", g.argv[i]);
      blob_zero(&fname);
      file_tree_name(zFile, &fname, 0, 1);
      db_multi_exec(
        "REPLACE INTO torevert VALUES(%B);"
        "INSERT OR IGNORE INTO torevert"
        " SELECT pathname"
        "   FROM vfile"
        "  WHERE origname=%B;",
    zFile = db_column_text(&q, 0);
    zFull = mprintf("%/%/", g.zLocalRoot, zFile);
    errCode = historical_version_of_file(zRevision, zFile, &record,
                                         &isLink, &isExe, 0, 2);
    if( errCode==2 ){
      if( db_int(0, "SELECT rid FROM vfile WHERE pathname=%Q OR origname=%Q",
                 zFile, zFile)==0 ){
        fossil_print("UNMANAGE: %s\n", zFile);
      }else{
        undo_save(zFile);
        file_delete(zFull);
        fossil_print("DELETE: %s\n", zFile);
      }
      db_multi_exec(
        "UPDATE OR REPLACE vfile"
        "   SET pathname=origname, origname=NULL"
        " WHERE pathname=%Q AND origname!=pathname;"
        "DELETE FROM vfile WHERE pathname=%Q",
        zFile, zFile
      );
    }else{
      sqlite3_int64 mtime;
      undo_save(zFile);
      if( file_wd_size(zFull)>=0 && (isLink || file_wd_islink(0)) ){
        file_delete(zFull);
      }
      if( isLink ){
        symlink_create(blob_str(&record), zFull);
      }else{
        blob_write_to_file(&record, zFull);
      }
      file_wd_setexe(zFull, isExe);
      fossil_print("REVERTED: %s\n", zFile);
      mtime = file_wd_mtime(zFull);
      db_multi_exec(
         "UPDATE vfile"
         "   SET mtime=%lld, chnged=0, deleted=0, isexe=%d, islink=%d,mrid=rid"
         " WHERE pathname=%Q OR origname=%Q",
         mtime, isExe, isLink, zFile, zFile
      );
    zFile = db_column_text(&q, 0);
    zFull = mprintf("%/%/", g.zLocalRoot, zFile);
    errCode = historical_version_of_file(zRevision, zFile, &record,
                                         &isLink, &isExe, 0, 2);
    if( errCode==2 ){
      if( db_int(0, "SELECT rid FROM vfile WHERE pathname=%Q OR origname=%Q",
                 zFile, zFile)==0 ){
        fossil_print("UNMANAGE %s\n", zFile);
      }else{
        undo_save(zFile);
        file_delete(zFull);
        fossil_print("DELETE   %s\n", zFile);
      }
      db_multi_exec(
        "UPDATE OR REPLACE vfile"
        "   SET pathname=origname, origname=NULL"
        " WHERE pathname=%Q AND origname!=pathname;"
        "DELETE FROM vfile WHERE pathname=%Q",
        zFile, zFile
      );
    }else{
      sqlite3_int64 mtime;
      undo_save(zFile);
      if( file_wd_size(zFull)>=0 && (isLink || file_wd_islink(0)) ){
        file_delete(zFull);
      }
      if( isLink ){
        symlink_create(blob_str(&record), zFull);
      }else{
        blob_write_to_file(&record, zFull);
      }
      file_wd_setexe(zFull, isExe);
      fossil_print("REVERT   %s\n", zFile);
      mtime = file_wd_mtime(zFull);
      db_multi_exec(
         "UPDATE vfile"
         "   SET mtime=%lld, chnged=0, deleted=0, isexe=%d, islink=%d,mrid=rid"
         " WHERE pathname=%Q OR origname=%Q",
         mtime, isExe, isLink, zFile, zFile
      );
Changes to src/winfile.c.
                   &privSetSize, &grantedAccess, &accessYesNo) ){
    /*
     * Unable to perform access check.
     */

    rc = -1; goto done;
  }
  if( !accessYesNo ) rc = -1;



done:

  if( hToken != NULL ){
    CloseHandle(hToken);
  }
  if( impersonated ){
                   &privSetSize, &grantedAccess, &accessYesNo) ){
    /*
     * Unable to perform access check.
     */

    rc = -1; goto done;
  }
  if( !accessYesNo ){
    rc = -1;
  }

done:

  if( hToken != NULL ){
    CloseHandle(hToken);
  }
  if( impersonated ){
Changes to src/xfersetup.c.
  }
  if( P("setup") ){
    cgi_redirect("xfersetup");
  }
  isSubmit = P("submit")!=0;
  z = P("x");
  if( z==0 ){
    z = db_get(zDbField, (char*)zDfltValue);
  }
  style_header("Edit %s", zTitle);
  if( P("clear")!=0 ){
    login_verify_csrf_secret();
    db_unset(zDbField, 0);
    if( xRebuild ) xRebuild();
    z = zDfltValue;
  }
  if( P("setup") ){
    cgi_redirect("xfersetup");
  }
  isSubmit = P("submit")!=0;
  z = P("x");
  if( z==0 ){
    z = db_get(zDbField, zDfltValue);
  }
  style_header("Edit %s", zTitle);
  if( P("clear")!=0 ){
    login_verify_csrf_secret();
    db_unset(zDbField, 0);
    if( xRebuild ) xRebuild();
    z = zDfltValue;
Added test/amend.test.
#
# Tests for the "amend" command.
#

proc short_uuid {uuid {len 10}} {
  string range $uuid 0 $len-1
}

proc artifact_from_timeline {res var} {
  upvar $var artid
  regexp {(?x)[0-9]{2}(?::[0-9]{2}){2}\s+\[([0-9a-f]+)]} $res m artid
}

proc manifest_comment {comment} {
  string map [list { } {\\s} \n {\\n} \r {\\r}] $comment
}

proc uuid_from_commit {res var} {
  upvar $var UUID
  regexp {^New_Version: ([0-9a-f]{40})$} $res m UUID
}

proc uuid_from_branch {res var} {
  upvar $var UUID
  regexp {^New branch: ([0-9a-f]{40})$} $res m UUID
}

proc uuid_from_checkout {var} {
  global RESULT
  upvar $var UUID
  fossil status
  regexp {checkout:\s+([0-9a-f]{40})} $RESULT m UUID
}

# Make sure we are not in an open repository and initialize new repository
repo_init

########################################
# Setup: Add file and commit           #
########################################

if {![uuid_from_checkout UUIDINIT]} {
  test amend-checkout-failure false
  return
}
write_file datafile "data"
fossil add datafile
fossil commit -m "c1"
if {![uuid_from_commit $RESULT UUID]} {
  test amend-setup-failure false
  return
}

########################################
# Test: -branch                        #
########################################
set UUIDB UUIDB
write_file datafile "data.file"
fossil commit -m "c2"
if {![uuid_from_commit $RESULT UUIDB]} {
  test amend-branch.setup false
}
fossil amend $UUIDB -branch amended-branch
test amend-branch-1.1 {[regexp {tags:\s+amended-branch} $RESULT]}
fossil branch ls
test amend-branch-1.2 {[string first "* amended-branch" $RESULT] != -1}
fossil tag list
test amend-branch-1.3 {[string first amended-branch $RESULT] != -1}
fossil tag list --raw $UUIDB
test amend-branch-1.4 {[string first "branch=amended-branch" $RESULT] != -1}
test amend-branch-1.5 {[string first "sym-amended-branch" $RESULT] != -1}
fossil timeline -n 1
test amend-branch-1.6 {[string match {*Move*to*branch*amended-branch*} $RESULT]}

########################################
# Test: -bgcolor                       #
########################################
set tc 0
foreach {color result} {
  0 0
  a a
  abcdef #abcdef
  abc123 #abc123
  123efg 123efg
  abcdefg abcdefg
  abcdeg abcdeg
  blue blue
  acf #acf
  123 #123
  #1234 #1234
  1234 1234
  123456 #123456
} {
  incr tc
  fossil amend $UUID -bgcolor $color
  test amend-bgcolor-1.$tc.a {[string match "*uuid:*$UUID*" $RESULT]}
  fossil tag list --raw $UUID
  test amend-bgcolor-1.$tc.b {[string first "bgcolor=$result" $RESULT] != -1}
  fossil timeline -n 1
  test amend-bgcolor-1.$tc.c {
    [string match "*Change*background*color*to*\"$result\"*" $RESULT]
  }
  if {[artifact_from_timeline $RESULT artid]} {
    fossil artifact $artid
    test amend-bgcolor-1.$tc.d {
      [string match "*T +bgcolor $UUID $result*" $RESULT]
    }
  } else {
    if {$VERBOSE} { protOut "No artifact found in timeline output" }
    test amend-bgcolor-1.$tc.d false
  }
}
fossil amend $UUID -bgcolor {}
test amend-bgcolor-2.1 {[string match "*uuid:*$UUID*" $RESULT]}
fossil tag list --raw $UUID
test amend-bgcolor-2.2 {
  [string first "bgcolor=" $RESULT] == -1 &&
  [string first "bgcolor" $RESULT] != -1
}
fossil timeline -n 1
test amend-bgcolor-2.3 {[string match "*Cancel*background*color.*" $RESULT]}
if {[artifact_from_timeline $RESULT artid]} {
  fossil artifact $artid
  test amend-bgcolor-2.4 {[string match "*T -bgcolor $UUID*" $RESULT]}
} else {
  if {$VERBOSE} { protOut "No artifact found in timeline output" }
  test amend-bgcolor-2.4 false
}

########################################
# Test: -branchcolor                   #
########################################
set UUID2 UUID2
fossil branch new brclr $UUID
if {![uuid_from_branch $RESULT UUID2]} {
  test amend-branchcolor.setup false
}
fossil update $UUID2
fossil amend $UUID2 -branchcolor yellow
test amend-branchcolor-1.1 {[string match "*uuid:*$UUID2*" $RESULT]}
fossil tag ls --raw $UUID2
test amend-branchcolor-1.2 {[string first "bgcolor=yellow" $RESULT] != -1}
fossil timeline -n 1
test amend-branchcolor-1.3 {
  [string match {*Change*branch*background*color*to*"yellow".*} $RESULT]
}
if {[regexp {(?x)[0-9]{2}(?::[0-9]{2}){2}\s+\[([0-9a-f]+)]} $RESULT m artid]} {
  fossil artifact $artid
  test amend-branchcolor-1.4 {
    [string match "*T \*bgcolor $UUID2 yellow*" $RESULT]
  }
} else {
  if {$VERBOSE} { protOut "No artifact found in timeline output" }
  test amend-branchcolor-1.4 false
}

set UUIDN UUIDN
write_file datafile "brclr"
fossil commit -m "brclr"
if {![uuid_from_commit $RESULT UUIDN]} {
  test amend-branchcolor-propagating.setup false
}
write_file datafile "bc1"
fossil commit -m "mc1"
write_file datafile "bc2"
fossil commit -m "mc2"
fossil amend $UUIDN -branchcolor deadbe
test amend-branchcolor-2.1 {[string match "*uuid:*$UUIDN*" $RESULT]}
fossil tag ls --raw current
test amend-branchcolor-2.2 {[string first "bgcolor=#deadbe" $RESULT] != -1}
fossil timeline -n 1
test amend-branchcolor-2.3 {
  [string match {*Change*branch*background*color*to*"#deadbe".*} $RESULT]
}

########################################
# Test: -author                        #
########################################
fossil amend $UUID -author author-test
test amend-author-1.1 {[string match {*comment:*(user:*author-test)*} $RESULT]}
fossil tag ls --raw $UUID
test amend-author-1.2 {[string first "user=author-test" $RESULT] != -1}
fossil timeline -n 1
test amend-author-1.3 {[string match {*Change*user*to*"author-test".*} $RESULT]}

########################################
# Test: -date                          #
########################################
set timestamp [clock scan yesterday]
set date [clock format $timestamp -format "%Y-%m-%d" -gmt 1]
set time [clock format $timestamp -format "%H:%M:%S" -gmt 1]
set datetime "$date $time"
fossil amend $UUIDINIT -date $datetime
test amend-date-1.1 {[string match "*uuid:*$UUIDINIT*$datetime*" $RESULT]}
fossil tag ls --raw $UUIDINIT
test amend-date-1.2 {[string first "date=$datetime" $RESULT] != -1}
fossil timeline -n 1
test amend-date-1.3 {[string match "*Timestamp*$date*$time*" $RESULT]}
set badformats {
  "%+"
  "%Y-%m-%d %H:%M%:%S %Z"
  "%d/%m/%Y %H:%M%:%S %Z"
  "%d/%m/%Y %H:%M%:%S"
  "%d/%m/%Y"
}
set sc 0
foreach badformat $badformats {
  incr sc
  set datetime [clock format $timestamp -format $badformat -gmt 1]
  fossil amend $UUIDINIT -date $datetime
  test amend-date-2.$sc {[string first "YYYY-MM-DD HH:MM:SS" $RESULT] != -1}
}

########################################
# Test: -hide                          #
########################################
set UUIDH UUIDH
fossil revert
fossil update trunk
fossil branch new tohide current
if {![uuid_from_branch $RESULT UUIDH]} {
  test amend-hide-setup false
}
fossil amend $UUIDH -hide
test amend-hide-1.1 {[string match "*uuid:*$UUIDH*" $RESULT]}
fossil tag ls --raw $UUIDH
test amend-hide-1.2 {[string first "hidden" $RESULT] != -1}
fossil timeline -n 1
test amend-hide-1.3 {[string match {*Add*propagating*"hidden".*} $RESULT]}

########################################
# Test: -close                          #
########################################
set UUIDC UUIDC
fossil branch new cllf $UUID
if {![uuid_from_branch $RESULT UUIDC]} {
  test amend-close.setup false
}
fossil update $UUIDC
fossil amend $UUIDC -close
test amend-close-1.1.a {[string match "*uuid:*$UUIDC*" $RESULT]}
test amend-close-1.1.b {
  [string match "*comment:*Create*new*branch*named*\"cllf\"*" $RESULT]
}
fossil tag ls --raw $UUIDC
test amend-close-1.2 {[string first "closed" $RESULT] != -1}
fossil timeline -n 1
test amend-close-1.3 {[string match {*Marked*"Closed".*} $RESULT]}
write_file datafile "cllf"
fossil commit -m "should fail"
test amend-close-2 {[string first "closed leaf" $RESULT] != -1}

set UUID3 UUID3
fossil revert
fossil update trunk
write_file datafile "cb"
fossil commit -m "closed-branch" --branch "closebranch"
if {![uuid_from_commit $RESULT UUID3]} {
  test amend-close-3.setup false
}
write_file datafile "b1"
fossil commit -m "m1"
write_file datafile "b2"
fossil commit -m "m2"
fossil amend $UUID3 --close
test amend-close-3.1 {[string match "*uuid:*$UUID3*" $RESULT]}
fossil tag ls --raw current
test amend-close-3.2 {[string first "closed" $RESULT] != -1}
fossil timeline -n 1
test amend-close-3.3 {
  [string match "*Add*propagating*\"closed\".*" $RESULT]
}
write_file datafile "changed"
fossil commit -m "should fail"
test amend-close-3.4 {[string first "closed leaf" $RESULT] != -1}

########################################
# Test: -tag/-cancel                   #
########################################
set tagtests {
  tagged tagged
  {000000 lower Upper alpha 0alpha} {000000 0alpha Upper alpha lower}
}
set tc 0
foreach {tagt result} $tagtests {
  incr tc
  set tags {}
  set cancels {}
  set t1exp ""
  set t2exp "*"
  set t3exp "*"
  set t5exp "*"
  foreach tag $tagt { 
    lappend tags -tag $tag
    lappend cancels -cancel $tag
  }
  foreach res $result {
    append t1exp ", $res"
    append t2exp "sym-$res*"
    append t3exp "Add*tag*\"$res\".*"
    append t5exp "Cancel*tag*\"$res\".*"
  }
  eval fossil amend $UUID $tags
  test amend-tag-$tc.1 {[string match "*uuid:*$UUID*tags:*$t1exp*" $RESULT]}
  fossil tag ls --raw $UUID
  test amend-tag-$tc.2 {[string match $t2exp $RESULT]}
  fossil timeline -n 1
  test amend-tag-$tc.3 {[string match $t3exp $RESULT]}
  eval fossil amend $UUID $cancels
  test amend-tag-$tc.4 {![string match "*tags:*$t1exp*" $RESULT]}
  fossil timeline -n 1
  test amend-tag-$tc.5 {[string match $t5exp $RESULT]}
}

########################################
# Test: -comment                       #
########################################
proc prep-test {comment content} {
  global UUID RESULT

  fossil revert
  fossil update trunk
  write_file datafile $comment
  fossil commit -m $content
  if {![uuid_from_commit $RESULT UUID]} {
    set UUID ""
  }
}

proc test-comment {name UUID comment} {
  global VERBOSE RESULT

  test amend-comment-$name.1 {
    [string match "*uuid:*$UUID*comment:*$comment*" $RESULT]
  }
  fossil timeline -n 1
  if {[artifact_from_timeline $RESULT artid]} {
    fossil artifact $artid
    test amend-comment-$name.2 {
      [string match "*T +comment $UUID *[manifest_comment $comment]*" $RESULT]
    }
  } else {
    if {$VERBOSE} { protOut "No artifact found in timeline output: $RESULT" }
    test amend-comment-$name.2 false
  }
  fossil timeline -n 1
  test amend-comment-$name.3 {
    [string match "*[short_uuid $UUID]*Edit*check-in*comment.*" $RESULT]
  }
  fossil info $UUID
  test amend-comment-$name.4 {
    [string match "*uuid:*$UUID*comment:*$comment*" $RESULT]
  }
}

prep-test "revision 1" "revision 1"
fossil amend $UUID -comment "revised revision 1"
test-comment 1 $UUID "revised revision 1"

prep-test "revision 2" "revision 2"
fossil amend $UUID -m "revised revision 2 with -m"
test-comment 2 $UUID "revised revision 2 with -m"

prep-test "revision 3" "revision 3"
write_file commitmsg "revision 3 revised"
fossil amend $UUID -message-file commitmsg
test-comment 3 $UUID "revision 3 revised"

prep-test "revision 4" "revision 4"
write_file commitmsg "revision 4 revised with -M"
fossil amend $UUID -M commitmsg
test-comment 4 $UUID "revision 4 revised with -M"

prep-test "final comment" "final content"
if {[catch {exec which ed} result]} {
  if {$VERBOSE} { protOut "Install ed for interactive comment test: $result" }
  test-comment 5 $UUID "ed required for interactive edit"
} else {
  set env(EDITOR) "ed -s"
  set comment "interactive edited comment"
  fossil_maybe_answer "a\n$comment\n.\nw\nq\n" amend $UUID --edit-comment
  unset env(EDITOR)
  test-comment 5 $UUID $comment
}

########################################
# Test: NULL UUID                      #
########################################
fossil amend {} -close
test amend-null-uuid {$CODE && [string first "no such check-in" $RESULT] != -1}
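
All of the amend tests above follow the same harness idiom: run a fossil subcommand, let its output land in $RESULT (and its exit status in $CODE), then assert an expression with the test proc. Those helpers come from the suite's tester.tcl; the sketch below re-implements just enough of the idiom for illustration and is not part of the suite -- the proc names mirror tester.tcl, but the bodies are simplified stand-ins.

# Illustrative sketch of the harness idiom (simplified, assumes a fossil
# binary on PATH; the real implementations live in test/tester.tcl).
proc fossil {args} {
  global RESULT CODE
  # Run the fossil binary and capture its output and exit status.
  set CODE [catch {exec fossil {*}$args} RESULT]
  return $RESULT
}
proc test {name expression} {
  # Evaluate the assertion in the caller's scope so $RESULT and $CODE resolve.
  set ok [uplevel 1 [list expr $expression]]
  puts [format "test %-24s %s" $name [expr {$ok ? "OK" : "FAILED"}]]
}

# Used in the same style as the amend tests above:
fossil version
test sketch-1 {[string match "*fossil version*" $RESULT]}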
Changes to test/merge5.test.
@@ -35,15 +35,14 @@
     test merge5-$testid 0
   } else {
     test merge5-$testid 1
   }
 }
 
 catch {exec $::fossilexe info} res
-puts res=$res
 if {![regexp {use --repository} $res]} {
   puts stderr "Cannot run this test within an open checkout"
   return
 }
 #
 # Fossil will write data on $HOME, running 'fossil open' here.
 # We need not to clutter the $HOME of the test caller.
Changes to test/merge_renames.test.
@@ -1,14 +1,13 @@
 #
 # Tests for merging with renames
 #
 #
 
 catch {exec $::fossilexe info} res
-puts res=$res
 if {![regexp {use --repository} $res]} {
   puts stderr "Cannot run this test within an open checkout"
   return
 }
 
 ######################################
 #  Test 1                            #
Changes to test/mv-rm.test.
@@ -15,15 +15,14 @@
 #
 ############################################################################
 #
 # MV / RM Commands
 #
 
 catch {exec $::fossilexe info} res
-puts res=$res
 if {![regexp {use --repository} $res]} {
   puts stderr "Cannot run this test within an open checkout"
   return
 }
 
 ########################################
 # Setup: Add Files and Commit          #
Changes to test/revert.test.
@@ -36,15 +36,14 @@
     test revert-$testid$key $passed
   }
 
   fossil undo
 }
 
 catch {exec $::fossilexe info} res
-puts res=$res
 if {![regexp {use --repository} $res]} {
   puts stderr "Cannot run this test within an open checkout"
   return
 }
 
 repo_init
 
Added test/th1-repo.test.
#
# Copyright (c) 2011 D. Richard Hipp
# Copyright (c) 2015 Ch. Drexler
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the Simplified BSD License (also
# known as the "2-Clause License" or "FreeBSD License".)
#
# This program is distributed in the hope that it will be useful,
# but without any warranty; without even the implied warranty of
# merchantability or fitness for a particular purpose.
#
# Author contact information:
#   drh@hwaci.com
#   http://www.hwaci.com/drh/
#
#   Chris Drexler <ckolumbus@ac-drexler.de>
#
############################################################################
#
# TH1 tests that may modify the repository
#

catch {exec $::fossilexe info} res
if {![regexp {use --repository} $res]} {
  puts stderr "Cannot run this test within an open checkout"
  return
}

########################################
# Setup: Add Files and Commit          #
########################################

set rootDir [file normalize [pwd]]

repo_init

write_file f1.md  "f1"
write_file f2.md  "f2"
write_file f3.txt "f3"
write_file f4.md  "f4"

file mkdir [file join $rootDir subdirA]
# NOTE: There are no files in subdirA.

file mkdir [file join $rootDir subdirB]
write_file [file join $rootDir subdirB f5.md] "f5"
write_file [file join $rootDir subdirB f6.md] "f6"
write_file [file join $rootDir subdirB f7.txt] "f7"
write_file [file join $rootDir subdirB f8.md] "f8"
write_file [file join $rootDir subdirB f9.wiki] "f9"

file mkdir [file join $rootDir subdirC]
write_file [file join $rootDir subdirC f10.md] "f10"
write_file [file join $rootDir subdirC f11t.xt] "f11"

set files_md [list subdirB/f5.md subdirB/f6.md subdirB/f8.md subdirC/f10.md]

fossil add $rootDir
fossil commit -m "c1"

set dir [file dirname [info script]]

###############################################################################

fossil test-th-eval --open-config "dir trunk subdir*/*.md"
test th1-dir-1 {[llength $RESULT] eq [llength $files_md]}

set n 1
foreach i $RESULT j $files_md {
   test th1-dir-2.$n {$i eq $j}
   set n [expr {$n + 1}]
}

###############################################################################

set dateTime {\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}}
fossil test-th-eval --open-config "dir trunk subdir*/*.md 1"
test th1-dir-3.1 {[lindex [lindex $RESULT 0] 0] eq "subdirB/f5.md"}
test th1-dir-3.2 {[lindex [lindex $RESULT 0] 1] == 2}
test th1-dir-3.3 {[regexp -- $dateTime [lindex [lindex $RESULT 0] 2]]}
test th1-dir-3.4 {[lindex [lindex $RESULT 1] 0] eq "subdirB/f6.md"}
test th1-dir-3.5 {[lindex [lindex $RESULT 1] 1] == 2}
test th1-dir-3.6 {[regexp -- $dateTime [lindex [lindex $RESULT 1] 2]]}
test th1-dir-3.7 {[lindex [lindex $RESULT 2] 0] eq "subdirB/f8.md"}
test th1-dir-3.8 {[lindex [lindex $RESULT 2] 1] == 2}
test th1-dir-3.9 {[regexp -- $dateTime [lindex [lindex $RESULT 2] 2]]}
test th1-dir-3.10 {[lindex [lindex $RESULT 3] 0] eq "subdirC/f10.md"}
test th1-dir-3.11 {[lindex [lindex $RESULT 3] 1] == 3}
test th1-dir-3.12 {[regexp -- $dateTime [lindex [lindex $RESULT 3] 2]]}
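
The th1-dir-3.* assertions above unpack each element of the verbose listing as a three-element list of name, size and check-in time. A small sketch of that shape, using invented values, shows how the lindex calls pick it apart (not part of the test file):

# Sketch only: one row of the verbose "dir" result as consumed above.
# The values are invented; the tests derive the real ones from the
# committed files (e.g. "f5" is 2 bytes).
set row [list subdirB/f5.md 2 "2015-11-03 05:47:00"]
set name  [lindex $row 0]   ;# path relative to the checkout root
set size  [lindex $row 1]   ;# file size in bytes
set mtime [lindex $row 2]   ;# timestamp matching YYYY-MM-DD HH:MM:SS
puts "$name is $size bytes, checked in at $mtime"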
Changes to test/th1.test.
@@ -14,14 +14,18 @@
 #   http://www.hwaci.com/drh/
 #
 ############################################################################
 #
 # TH1 Commands
 #
 
+set dir [file dirname [info script]]
+
+###############################################################################
+
 fossil test-th-eval --open-config "setting th1-hooks"
 set th1Hooks [expr {$RESULT eq "1"}]
 
 ###############################################################################
 
 fossil test-th-eval --open-config "setting abc"
 test th1-setting-1 {$RESULT eq ""}
@@ -855,18 +859,18 @@
 
 #
 # NOTE: This test may fail if the command names do not always come
 #       out in a deterministic order from TH1.
 #
 fossil test-th-eval "info commands"
 test th1-info-commands-1 {$RESULT eq {linecount htmlize date stime\
-enable_output uplevel http expr glob_match utime styleFooter catch if\
-tclReady searchable reinitialize combobox lindex query html anoncap randhex\
-llength for set break regexp styleHeader puts return checkout decorate\
-artifact trace wiki proc hascap globalState continue getParameter\
+enable_output uplevel dir http expr glob_match utime styleFooter encode64\
+catch if tclReady searchable reinitialize combobox lindex query html anoncap\
+randhex llength for set break regexp markdown styleHeader puts return checkout\
+decorate artifact trace wiki proc hascap globalState continue getParameter\
 hasfeature setting lsearch breakpoint upvar render repository string unset\
 setParameter list error info rename anycap httpize}}
 
 ###############################################################################
 
 fossil test-th-eval "info vars"
 test th1-info-vars-1 {$RESULT eq ""}
@@ -1010,7 +1014,258 @@
(The seven glob_match lines immediately below are unchanged context; every line after "test th1-glob-match-13" is an addition. The block is shown without diff markers.)
fossil test-th-eval {list [glob_match a?c abc] [glob_match abc a?c]}
test th1-glob-match-12 {$RESULT eq "1 0"}

###############################################################################

fossil test-th-eval {list [glob_match {a[bd]c} abc] [glob_match abc {a[bd]c}]}
test th1-glob-match-13 {$RESULT eq "1 0"}

###############################################################################

fossil test-th-eval {string is}
test th1-string-is-1 {$RESULT eq \
{TH_ERROR: wrong # args: should be "string is class string"}}

###############################################################################

fossil test-th-eval {string is something}
test th1-string-is-2 {$RESULT eq \
{TH_ERROR: wrong # args: should be "string is class string"}}

###############################################################################

fossil test-th-eval {string is not something else}
test th1-string-is-3 {$RESULT eq \
{TH_ERROR: wrong # args: should be "string is class string"}}

###############################################################################

fossil test-th-eval {string is other 123}
test th1-string-is-4 {$RESULT eq \
"TH_ERROR: Expected alnum, double, integer, or list, got: other"}

###############################################################################

fossil test-th-eval {string is alnum 123}
test th1-string-is-5 {$RESULT eq "1"}

###############################################################################

fossil test-th-eval {string is alnum abc}
test th1-string-is-6 {$RESULT eq "1"}

###############################################################################

fossil test-th-eval {string is alnum 123abc}
test th1-string-is-7 {$RESULT eq "1"}

###############################################################################

fossil test-th-eval {string is alnum abc123}
test th1-string-is-8 {$RESULT eq "1"}

###############################################################################

fossil test-th-eval {string is alnum _abc123}
test th1-string-is-9 {$RESULT eq "0"}

###############################################################################

fossil test-th-eval {string is alnum abc.123}
test th1-string-is-10 {$RESULT eq "0"}

###############################################################################

fossil test-th-eval {string is alnum abc123_}
test th1-string-is-11 {$RESULT eq "0"}

###############################################################################

fossil test-th-eval {string is list ""}
test th1-string-is-12 {$RESULT eq "1"}

###############################################################################

fossil test-th-eval {string is list 1}
test th1-string-is-13 {$RESULT eq "1"}

###############################################################################

fossil test-th-eval {string is list "1 2 3"}
test th1-string-is-14 {$RESULT eq "1"}

###############################################################################

fossil test-th-eval {string is list "\{"}
test th1-string-is-15 {$RESULT eq "0"}

###############################################################################

fossil test-th-eval {string is list "1 2 3 \{"}
test th1-string-is-16 {$RESULT eq "0"}

###############################################################################

fossil test-th-eval {string is list "1 2 3 \{\}"}
test th1-string-is-17 {$RESULT eq "1"}

###############################################################################

fossil test-th-eval {string is list "1 2 3 \{\{\}"}
test th1-string-is-18 {$RESULT eq "0"}

###############################################################################

fossil test-th-eval {string is double 123}
test th1-string-is-19 {$RESULT eq "1"}

###############################################################################

fossil test-th-eval {string is double 123.456}
test th1-string-is-20 {$RESULT eq "1"}

###############################################################################

fossil test-th-eval {string is double 123abc}
test th1-string-is-21 {$RESULT eq "0"}

###############################################################################

fossil test-th-eval {string is double 123_456}
test th1-string-is-22 {$RESULT eq "0"}

###############################################################################

fossil test-th-eval {string is integer 123}
test th1-string-is-23 {$RESULT eq "1"}

###############################################################################

fossil test-th-eval {string is integer 123.456}
test th1-string-is-24 {$RESULT eq "0"}

###############################################################################

fossil test-th-eval {string is integer 123abc}
test th1-string-is-25 {$RESULT eq "0"}

###############################################################################

fossil test-th-eval {string is integer 0b11001001}
test th1-string-is-26 {$RESULT eq "1"}

###############################################################################

fossil test-th-eval {string is integer 0b11001002}
test th1-string-is-27 {$RESULT eq "0"}

###############################################################################

fossil test-th-eval {string is integer 0o777}
test th1-string-is-28 {$RESULT eq "1"}

###############################################################################

fossil test-th-eval {string is integer 0o778}
test th1-string-is-29 {$RESULT eq "0"}

###############################################################################

fossil test-th-eval {string is integer 0xC0DEF00D}
test th1-string-is-30 {$RESULT eq "1"}

###############################################################################

fossil test-th-eval {string is integer 0xC0DEF00Z}
test th1-string-is-31 {$RESULT eq "0"}

###############################################################################

fossil test-th-eval {markdown}
test th1-markdown-1 {$RESULT eq \
{TH_ERROR: wrong # args: should be "markdown STRING"}}

###############################################################################

fossil test-th-eval {markdown one two}
test th1-markdown-2 {$RESULT eq \
{TH_ERROR: wrong # args: should be "markdown STRING"}}

###############################################################################

fossil test-th-eval {markdown "*This is a test.*"}
test th1-markdown-3 {[normalize_result] eq {{} {<div class="markdown">

<p><em>This is a test.</em></p>

</div>
}}}

###############################################################################

fossil test-th-eval {markdown "Test1\n=====\n*This is a test.*"}
test th1-markdown-4 {[normalize_result] eq {Test1 {<div class="markdown">

<h1>Test1</h1>
<p><em>This is a test.</em></p>

</div>
}}}

###############################################################################

set markdown [read_file [file join $dir markdown-test1.md]]
fossil test-th-eval [string map \
    [list %markdown% $markdown] {markdown {%markdown%}}]
test th1-markdown-5 {[normalize_result] eq \
{{Markdown Formatter Test Document} {<div class="markdown">

<h1>Markdown Formatter Test Document</h1>
<p>This document is designed to test the markdown formatter.</p>

<ul>
<li>A bullet item.

<ul>
<li>A subitem</li>
</ul></li>
<li>Second bullet</li>
</ul>

<p>More text</p>

<ol>
<li>Enumeration
1.1.  Subitem 1
1.2.  Subitem 2</li>
<li>Second enumeration.</li>
</ol>

<p>Another paragraph.</p>

<h2>Other Features</h2>
<p>Text can show <em>emphasis</em> or <em>emphasis</em> or <strong>strong emphassis</strong>.</p>

</div>
}}}

###############################################################################

fossil test-th-eval {encode64 test}
test th1-encode64-1 {$RESULT eq "dGVzdA=="}

###############################################################################

fossil test-th-eval {encode64 test\x00}
test th1-encode64-2 {$RESULT eq "dGVzdAA="}

###############################################################################

#
# TODO: Modify the result of this test if the source file (i.e.
#       "ajax/cgi-bin/fossil-json.cgi.example") changes.
#
fossil test-th-eval --open-config \
    {encode64 [artifact trunk ajax/cgi-bin/fossil-json.cgi.example]}

test th1-encode64-3 {$RESULT eq \
"IyEvcGF0aC90by9mb3NzaWwvYmluYXJ5CnJlcG9zaXRvcnk6IC9wYXRoL3RvL3JlcG8uZnNsCg=="}
Changes to tools/cvs2fossil/changeset.
@@ -12,15 +12,15 @@ whitespace-only change (trailing blank removed from line 19)
 # individuals.  For exact contribution history, see the revision
 # history and logs, available at http://fossil-scm.hwaci.com/fossil
 # # ## ### ##### ######## ############# #####################
 
 ## Helper application, debugging of cvs2fossil. This application
 ## extracts all information about a changeset and writes it nicely
 ## formatted to stdout. The changeset is specified by its internal
-## numerical id. 
+## numerical id.
 
 # # ## ### ##### ######## ############# #####################
 ## Requirements, extended package management for local packages.
 
 lappend auto_path [file join [file dirname [info script]] lib]
 
 package require Tcl 8.4                               ; # Required runtime.
Changes to tools/cvs2fossil/lib/c2f_pbreakacycle.tcl.
@@ -268,15 +268,15 @@ whitespace-only change (trailing blank removed from line 275)
 
 	foreach item [array names limits] {
 	    set mins $minsa($item)
 	    set maxp $maxp($item)
 	    # Note that for the min successor position "" represents
 	    # +infinity
 	    integrity assert {
-		($mins eq "") || ($maxp < $mins) 
+		($mins eq "") || ($maxp < $mins)
 	    } {Item <$item> is backward at file level ($maxp >= $mins)}
 	}
 
 	# Save the limits for the splitter, and compute the border at
 	# which to split as the minimum of all minimal successor
 	# positions.
 
Changes to tools/cvs2fossil/lib/c2f_prev.tcl.
@@ -1177,15 +1177,15 @@ whitespace-only change (line 1184 contained only tabs)
 
     # List of all known changesets of a type.
     typevariable mytchangesets -array {
 	sym::branch {}
 	sym::tag    {}
 	rev         {}
     }
-					
+
     typevariable myitemmap     -array {} ; # Map from items (tagged)
 					   # to the list of changesets
 					   # containing it. Each item
 					   # can be used by only one
 					   # changeset.
     typevariable myidmap   -array {} ; # Map from changeset id to
 				       # changeset.
Changes to tools/cvs2fossil/lib/mem.tcl.
@@ -46,16 +46,16 @@ whitespace-only change (trailing tabs removed from lines 53-54)
 
 	variable lcba
 	variable lmba
 	variable mid
 
 	struct::list assign [minfo] _ _ _ cba _ mba
 
-	set dc [expr $cba - $lcba] ; set lcba $cba	
-	set dm [expr $mba - $lmba] ; set lmba $mba	
+	set dc [expr $cba - $lcba] ; set lcba $cba
+	set dm [expr $mba - $lmba] ; set lmba $mba
 
 	# projection: 1          2 3          4 5         6 7          6 8         10
 	return "[F [incr mid]] | [F $cba] | [F $dc] | [F $mba] | [F $dm] |=| "
     }
 
     proc mark {} {
 	variable track ; if {!$track} return
Changes to tools/fossilwiki.
@@ -45,22 +45,22 @@ whitespace-only change (trailing blanks removed from lines 52 and 59)
 	while ( $text =~ m/\[([^][]+)\]/g )
 	{
 		push @links,$1;
 	}
 
 	$numlinks = $#links;
 
-	if (@links == ()) 
+	if (@links == ())
 	{
 		push @terminals, $page;
 	}
 	else
 	{
 		my @internals = grep { $_ !~ /(http:)|(mailto:)|(https:)/ } @links;
-		if (@internals == ()) 
+		if (@internals == ())
 		{
 			push @nointernals, $page;
 		}
 		else
 		{
 			@{$links{$page}{'links'}} = map {my ($a,$b) = split /\|/; $a;} @internals;
 			foreach $internal ( @internals )

@@ -114,19 +114,19 @@ whitespace-only change (trailing blank removed from line 121)
 foreach $link ( keys %badlinks )
 {
 	print ("badlink: '$link'\n");
 }
 foreach $page ( sort keys %links )
 {
 	my @links = @{$links{$page}{'links'}};
-	if (@links != ()) 
+	if (@links != ())
 	{
 		if ($page eq $mainpage)
 		{
 			print "links: *** '$page' *** -> ", join (", ", @links), "\n";
 		}
 		else
 		{
 			print "links: '$page' -> ", join (", ", @links), "\n";
 		}
 	}
 }
Changes to win/Makefile.PellesCGMake.
@@ -81,15 +81,15 @@
 UTILS_OBJ=$(UTILS:.exe=.obj)
 UTILS_SRC=$(foreach uf,$(UTILS),$(SRCDIR)$(uf:.exe=.c))
 
 # define the SQLite files, which need special flags on compile
 SQLITESRC=sqlite3.c
 ORIGSQLITESRC=$(foreach sf,$(SQLITESRC),$(SRCDIR)$(sf))
 SQLITEOBJ=$(foreach sf,$(SQLITESRC),$(sf:.c=.obj))
-SQLITEDEFINES=-DNDEBUG=1 -DSQLITE_OMIT_LOAD_EXTENSION=1 -DSQLITE_ENABLE_LOCKING_STYLE=0 -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4 -DSQLITE_OMIT_DEPRECATED -DSQLITE_ENABLE_EXPLAIN_COMMENTS -DSQLITE_ENABLE_FTS4 -DSQLITE_ENABLE_FTS3_PARENTHESIS -DSQLITE_ENABLE_DBSTAT_VTAB -DSQLITE_WIN32_NO_ANSI
+SQLITEDEFINES=-DNDEBUG=1 -DSQLITE_OMIT_LOAD_EXTENSION=1 -DSQLITE_ENABLE_LOCKING_STYLE=0 -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4 -DSQLITE_OMIT_DEPRECATED -DSQLITE_ENABLE_EXPLAIN_COMMENTS -DSQLITE_ENABLE_FTS4 -DSQLITE_ENABLE_FTS3_PARENTHESIS -DSQLITE_ENABLE_DBSTAT_VTAB -DSQLITE_ENABLE_JSON1 -DSQLITE_ENABLE_FTS5 -DSQLITE_WIN32_NO_ANSI
 
 # define the SQLite shell files, which need special flags on compile
 SQLITESHELLSRC=shell.c
 ORIGSQLITESHELLSRC=$(foreach sf,$(SQLITESHELLSRC),$(SRCDIR)$(sf))
 SQLITESHELLOBJ=$(foreach sf,$(SQLITESHELLSRC),$(sf:.c=.obj))
 SQLITESHELLDEFINES=-Dmain=sqlite3_shell -DSQLITE_OMIT_LOAD_EXTENSION=1 -DUSE_SYSTEM_SQLITE=$(USE_SYSTEM_SQLITE) -DSQLITE_SHELL_DBNAME_PROC=fossil_open -Daccess=file_access -Dsystem=fossil_system -Dgetenv=fossil_getenv -Dfopen=fossil_fopen
 
Changes to win/Makefile.dmc.
@@ -22,15 +22,15 @@
 SSL    =
 
 CFLAGS = -o
 BCC    = $(DMDIR)\bin\dmc $(CFLAGS)
 TCC    = $(DMDIR)\bin\dmc $(CFLAGS) $(DMCDEF) $(SSL) $(INCL)
 LIBS   = $(DMDIR)\extra\lib\ zlib wsock32 advapi32
 
-SQLITE_OPTIONS = -DNDEBUG=1 -DSQLITE_OMIT_LOAD_EXTENSION=1 -DSQLITE_ENABLE_LOCKING_STYLE=0 -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4 -DSQLITE_OMIT_DEPRECATED -DSQLITE_ENABLE_EXPLAIN_COMMENTS -DSQLITE_ENABLE_FTS4 -DSQLITE_ENABLE_FTS3_PARENTHESIS -DSQLITE_ENABLE_DBSTAT_VTAB
+SQLITE_OPTIONS = -DNDEBUG=1 -DSQLITE_OMIT_LOAD_EXTENSION=1 -DSQLITE_ENABLE_LOCKING_STYLE=0 -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4 -DSQLITE_OMIT_DEPRECATED -DSQLITE_ENABLE_EXPLAIN_COMMENTS -DSQLITE_ENABLE_FTS4 -DSQLITE_ENABLE_FTS3_PARENTHESIS -DSQLITE_ENABLE_DBSTAT_VTAB -DSQLITE_ENABLE_JSON1 -DSQLITE_ENABLE_FTS5
 
 SHELL_OPTIONS = -Dmain=sqlite3_shell -DSQLITE_OMIT_LOAD_EXTENSION=1 -DUSE_SYSTEM_SQLITE=$(USE_SYSTEM_SQLITE) -DSQLITE_SHELL_DBNAME_PROC=fossil_open -Daccess=file_access -Dsystem=fossil_system -Dgetenv=fossil_getenv -Dfopen=fossil_fopen
 
 SRC   = add_.c allrepo_.c attach_.c bag_.c bisect_.c blob_.c branch_.c browse_.c builtin_.c bundle_.c cache_.c captcha_.c cgi_.c checkin_.c checkout_.c clearsign_.c clone_.c comformat_.c configure_.c content_.c db_.c delta_.c deltacmd_.c descendants_.c diff_.c diffcmd_.c doc_.c encode_.c event_.c export_.c file_.c finfo_.c foci_.c fusefs_.c glob_.c graph_.c gzip_.c http_.c http_socket_.c http_ssl_.c http_transport_.c import_.c info_.c json_.c json_artifact_.c json_branch_.c json_config_.c json_diff_.c json_dir_.c json_finfo_.c json_login_.c json_query_.c json_report_.c json_status_.c json_tag_.c json_timeline_.c json_user_.c json_wiki_.c leaf_.c loadctrl_.c login_.c lookslike_.c main_.c manifest_.c markdown_.c markdown_html_.c md5_.c merge_.c merge3_.c moderate_.c name_.c path_.c piechart_.c pivot_.c popen_.c pqueue_.c printf_.c publish_.c purge_.c rebuild_.c regexp_.c report_.c rss_.c schema_.c search_.c setup_.c sha1_.c shun_.c sitemap_.c skins_.c sqlcmd_.c stash_.c stat_.c statrep_.c style_.c sync_.c tag_.c tar_.c th_main_.c timeline_.c tkt_.c tktsetup_.c undo_.c unicode_.c update_.c url_.c user_.c utf8_.c util_.c verify_.c vfile_.c wiki_.c wikiformat_.c winfile_.c winhttp_.c wysiwyg_.c xfer_.c xfersetup_.c zip_.c 
 
 OBJ   = $(OBJDIR)\add$O $(OBJDIR)\allrepo$O $(OBJDIR)\attach$O $(OBJDIR)\bag$O $(OBJDIR)\bisect$O $(OBJDIR)\blob$O $(OBJDIR)\branch$O $(OBJDIR)\browse$O $(OBJDIR)\builtin$O $(OBJDIR)\bundle$O $(OBJDIR)\cache$O $(OBJDIR)\captcha$O $(OBJDIR)\cgi$O $(OBJDIR)\checkin$O $(OBJDIR)\checkout$O $(OBJDIR)\clearsign$O $(OBJDIR)\clone$O $(OBJDIR)\comformat$O $(OBJDIR)\configure$O $(OBJDIR)\content$O $(OBJDIR)\db$O $(OBJDIR)\delta$O $(OBJDIR)\deltacmd$O $(OBJDIR)\descendants$O $(OBJDIR)\diff$O $(OBJDIR)\diffcmd$O $(OBJDIR)\doc$O $(OBJDIR)\encode$O $(OBJDIR)\event$O $(OBJDIR)\export$O $(OBJDIR)\file$O $(OBJDIR)\finfo$O $(OBJDIR)\foci$O $(OBJDIR)\fusefs$O $(OBJDIR)\glob$O $(OBJDIR)\graph$O $(OBJDIR)\gzip$O $(OBJDIR)\http$O $(OBJDIR)\http_socket$O $(OBJDIR)\http_ssl$O $(OBJDIR)\http_transport$O $(OBJDIR)\import$O $(OBJDIR)\info$O $(OBJDIR)\json$O $(OBJDIR)\json_artifact$O $(OBJDIR)\json_branch$O $(OBJDIR)\json_config$O $(OBJDIR)\json_diff$O $(OBJDIR)\json_dir$O $(OBJDIR)\json_finfo$O $(OBJDIR)\json_login$O $(OBJDIR)\json_query$O $(OBJDIR)\json_report$O $(OBJDIR)\json_status$O $(OBJDIR)\json_tag$O $(OBJDIR)\json_timeline$O $(OBJDIR)\json_user$O $(OBJDIR)\json_wiki$O $(OBJDIR)\leaf$O $(OBJDIR)\loadctrl$O $(OBJDIR)\login$O $(OBJDIR)\lookslike$O $(OBJDIR)\main$O $(OBJDIR)\manifest$O $(OBJDIR)\markdown$O $(OBJDIR)\markdown_html$O $(OBJDIR)\md5$O $(OBJDIR)\merge$O $(OBJDIR)\merge3$O $(OBJDIR)\moderate$O $(OBJDIR)\name$O $(OBJDIR)\path$O $(OBJDIR)\piechart$O $(OBJDIR)\pivot$O $(OBJDIR)\popen$O $(OBJDIR)\pqueue$O $(OBJDIR)\printf$O $(OBJDIR)\publish$O $(OBJDIR)\purge$O $(OBJDIR)\rebuild$O $(OBJDIR)\regexp$O $(OBJDIR)\report$O $(OBJDIR)\rss$O $(OBJDIR)\schema$O $(OBJDIR)\search$O $(OBJDIR)\setup$O $(OBJDIR)\sha1$O $(OBJDIR)\shun$O $(OBJDIR)\sitemap$O $(OBJDIR)\skins$O $(OBJDIR)\sqlcmd$O $(OBJDIR)\stash$O $(OBJDIR)\stat$O $(OBJDIR)\statrep$O $(OBJDIR)\style$O $(OBJDIR)\sync$O $(OBJDIR)\tag$O $(OBJDIR)\tar$O $(OBJDIR)\th_main$O $(OBJDIR)\timeline$O $(OBJDIR)\tkt$O $(OBJDIR)\tktsetup$O $(OBJDIR)\undo$O $(OBJDIR)\unicode$O $(OBJDIR)\update$O $(OBJDIR)\url$O $(OBJDIR)\user$O $(OBJDIR)\utf8$O $(OBJDIR)\util$O $(OBJDIR)\verify$O $(OBJDIR)\vfile$O $(OBJDIR)\wiki$O $(OBJDIR)\wikiformat$O $(OBJDIR)\winfile$O $(OBJDIR)\winhttp$O $(OBJDIR)\wysiwyg$O $(OBJDIR)\xfer$O $(OBJDIR)\xfersetup$O $(OBJDIR)\zip$O $(OBJDIR)\shell$O $(OBJDIR)\sqlite3$O $(OBJDIR)\th$O $(OBJDIR)\th_lang$O 
 
Changes to win/Makefile.mingw.
@@ -1,20 +1,24 @@
 #!/usr/bin/make
 #
 ##############################################################################
 # WARNING: DO NOT EDIT, AUTOMATICALLY GENERATED FILE (SEE "src/makemake.tcl")
 ##############################################################################
 #
 # This file is automatically generated.  Instead of editing this
 # file, edit "makemake.tcl" then run "tclsh makemake.tcl"
 # to regenerate this file.
 #
 # This is a makefile for use on Cygwin/Darwin/FreeBSD/Linux/Windows using
 # MinGW or MinGW-w64.
 #
+# Some of the special options which can be passed to make
+#   USE_WINDOWS=1    if building under a windows command prompt
+#   X64=1            if using an unprefixed 64-bit mingw compiler
+#
 
 #### Select one of MinGW, MinGW-w64 (32-bit) or MinGW-w64 (64-bit) compilers.
 #    By default, this is an empty string (i.e. use the native compiler).
 #
 PREFIX =
 # PREFIX = mingw32-
 # PREFIX = i686-pc-mingw32-

@@ -50,14 +54,18 @@
 #
 # FOSSIL_ENABLE_SSL = 1
 
 #### Automatically build OpenSSL when building Fossil (causes rebuild
 #    issues when building incrementally).
 #
 # FOSSIL_BUILD_SSL = 1
+
+#### Enable relative paths in external diff/gdiff
+#
+# FOSSIL_ENABLE_EXEC_REL_PATHS = 1
 
 #### Enable legacy treatment of mv/rm (skip checkout files)
 #
 # FOSSIL_ENABLE_LEGACY_MV_RM = 1
 
 #### Enable TH1 scripts in embedded documentation files
 #

@@ -195,27 +203,29 @@
 
 #### C Compile and options for use in building executables that
 #    will run on the target platform.  This is usually the same
 #    as BCC, unless you are cross-compiling.  This C compiler builds
 #    the finished binary for fossil.  The BCC compiler above is used
 #    for building intermediate code-generator tools.
 #
-TCC = $(PREFIX)gcc -Os -Wall
-
-#### When not using the miniz compression library, zlib is required.
-#
-ifndef FOSSIL_ENABLE_MINIZ
-TCC += -L$(ZLIBDIR) -I$(ZINCDIR)
-endif
+TCC = $(PREFIX)gcc -Wall
 
 #### Add the necessary command line options to build with debugging
 #    symbols, if enabled.
 #
 ifdef FOSSIL_ENABLE_SYMBOLS
 TCC += -g
+else
+TCC += -Os
+endif
+
+#### When not using the miniz compression library, zlib is required.
+#
+ifndef FOSSIL_ENABLE_MINIZ
+TCC += -L$(ZLIBDIR) -I$(ZINCDIR)
 endif
 
 #### Compile resources for use in building executables that will run
 #    on the target platform.
 #
 RCC = $(PREFIX)windres -I$(SRCDIR)
 

@@ -253,14 +263,20 @@
 endif
 
 # With HTTPS support
 ifdef FOSSIL_ENABLE_SSL
 TCC += -DFOSSIL_ENABLE_SSL=1
 RCC += -DFOSSIL_ENABLE_SSL=1
 endif
+
+# With relative paths in external diff/gdiff
+ifdef FOSSIL_ENABLE_EXEC_REL_PATHS
+TCC += -DFOSSIL_ENABLE_EXEC_REL_PATHS=1
+RCC += -DFOSSIL_ENABLE_EXEC_REL_PATHS=1
+endif
 
 # With legacy treatment of mv/rm
 ifdef FOSSIL_ENABLE_LEGACY_MV_RM
 TCC += -DFOSSIL_ENABLE_LEGACY_MV_RM=1
 RCC += -DFOSSIL_ENABLE_LEGACY_MV_RM=1
 endif
 

@@ -2074,14 +2090,16 @@
                  -DSQLITE_THREADSAFE=0 \
                  -DSQLITE_DEFAULT_FILE_FORMAT=4 \
                  -DSQLITE_OMIT_DEPRECATED \
                  -DSQLITE_ENABLE_EXPLAIN_COMMENTS \
                  -DSQLITE_ENABLE_FTS4 \
                  -DSQLITE_ENABLE_FTS3_PARENTHESIS \
                  -DSQLITE_ENABLE_DBSTAT_VTAB \
+                 -DSQLITE_ENABLE_JSON1 \
+                 -DSQLITE_ENABLE_FTS5 \
                  -DSQLITE_WIN32_NO_ANSI \
                  -D_HAVE__MINGW_H \
                  -DSQLITE_USE_MALLOC_H \
                  -DSQLITE_USE_MSIZE
 
 SHELL_OPTIONS = -Dmain=sqlite3_shell \
                 -DSQLITE_OMIT_LOAD_EXTENSION=1 \
Changes to win/Makefile.mingw.mistachkin.
@@ -1,20 +1,24 @@
 #!/usr/bin/make
 #
 ##############################################################################
 # WARNING: DO NOT EDIT, AUTOMATICALLY GENERATED FILE (SEE "src/makemake.tcl")
 ##############################################################################
 #
 # This file is automatically generated.  Instead of editing this
 # file, edit "makemake.tcl" then run "tclsh makemake.tcl"
 # to regenerate this file.
 #
 # This is a makefile for use on Cygwin/Darwin/FreeBSD/Linux/Windows using
 # MinGW or MinGW-w64.
 #
+# Some of the special options which can be passed to make
+#   USE_WINDOWS=1    if building under a windows command prompt
+#   X64=1            if using an unprefixed 64-bit mingw compiler
+#
 
 #### Select one of MinGW, MinGW-w64 (32-bit) or MinGW-w64 (64-bit) compilers.
 #    By default, this is an empty string (i.e. use the native compiler).
 #
 PREFIX =
 # PREFIX = mingw32-
 # PREFIX = i686-pc-mingw32-

@@ -50,14 +54,18 @@
 #
 FOSSIL_ENABLE_SSL = 1
 
 #### Automatically build OpenSSL when building Fossil (causes rebuild
 #    issues when building incrementally).
 #
 # FOSSIL_BUILD_SSL = 1
+
+#### Enable relative paths in external diff/gdiff
+#
+# FOSSIL_ENABLE_EXEC_REL_PATHS = 1
 
 #### Enable legacy treatment of mv/rm (skip checkout files)
 #
 FOSSIL_ENABLE_LEGACY_MV_RM = 1
 
 #### Enable TH1 scripts in embedded documentation files
 #

@@ -195,27 +203,29 @@
 
 #### C Compile and options for use in building executables that
 #    will run on the target platform.  This is usually the same
 #    as BCC, unless you are cross-compiling.  This C compiler builds
 #    the finished binary for fossil.  The BCC compiler above is used
 #    for building intermediate code-generator tools.
 #
-TCC = $(PREFIX)gcc -Os -Wall
-
-#### When not using the miniz compression library, zlib is required.
-#
-ifndef FOSSIL_ENABLE_MINIZ
-TCC += -L$(ZLIBDIR) -I$(ZINCDIR)
-endif
+TCC = $(PREFIX)gcc -Wall
 
 #### Add the necessary command line options to build with debugging
 #    symbols, if enabled.
 #
 ifdef FOSSIL_ENABLE_SYMBOLS
 TCC += -g
+else
+TCC += -Os
+endif
+
+#### When not using the miniz compression library, zlib is required.
+#
+ifndef FOSSIL_ENABLE_MINIZ
+TCC += -L$(ZLIBDIR) -I$(ZINCDIR)
 endif
 
 #### Compile resources for use in building executables that will run
 #    on the target platform.
 #
 RCC = $(PREFIX)windres -I$(SRCDIR)
 

@@ -253,14 +263,20 @@
 endif
 
 # With HTTPS support
 ifdef FOSSIL_ENABLE_SSL
 TCC += -DFOSSIL_ENABLE_SSL=1
 RCC += -DFOSSIL_ENABLE_SSL=1
 endif
+
+# With relative paths in external diff/gdiff
+ifdef FOSSIL_ENABLE_EXEC_REL_PATHS
+TCC += -DFOSSIL_ENABLE_EXEC_REL_PATHS=1
+RCC += -DFOSSIL_ENABLE_EXEC_REL_PATHS=1
+endif
 
 # With legacy treatment of mv/rm
 ifdef FOSSIL_ENABLE_LEGACY_MV_RM
 TCC += -DFOSSIL_ENABLE_LEGACY_MV_RM=1
 RCC += -DFOSSIL_ENABLE_LEGACY_MV_RM=1
 endif
 

@@ -2074,14 +2090,16 @@
                  -DSQLITE_THREADSAFE=0 \
                  -DSQLITE_DEFAULT_FILE_FORMAT=4 \
                  -DSQLITE_OMIT_DEPRECATED \
                  -DSQLITE_ENABLE_EXPLAIN_COMMENTS \
                  -DSQLITE_ENABLE_FTS4 \
                  -DSQLITE_ENABLE_FTS3_PARENTHESIS \
                  -DSQLITE_ENABLE_DBSTAT_VTAB \
+                 -DSQLITE_ENABLE_JSON1 \
+                 -DSQLITE_ENABLE_FTS5 \
                  -DSQLITE_WIN32_NO_ANSI \
                  -D_HAVE__MINGW_H \
                  -DSQLITE_USE_MALLOC_H \
                  -DSQLITE_USE_MSIZE
 
 SHELL_OPTIONS = -Dmain=sqlite3_shell \
                 -DSQLITE_OMIT_LOAD_EXTENSION=1 \
Changes to win/Makefile.msc.
@@ -44,14 +44,19 @@
 FOSSIL_BUILD_ZLIB = 1
 !endif
 
 # Link everything except SQLite dynamically?
 !ifndef FOSSIL_DYNAMIC_BUILD
 FOSSIL_DYNAMIC_BUILD = 0
 !endif
+
+# Enable relative paths in external diff/gdiff?
+!ifndef FOSSIL_ENABLE_EXEC_REL_PATHS
+FOSSIL_ENABLE_EXEC_REL_PATHS = 0
+!endif
 
 # Enable the JSON API?
 !ifndef FOSSIL_ENABLE_JSON
 FOSSIL_ENABLE_JSON = 0
 !endif
 
 # Enable legacy treatment of the mv/rm commands?

@@ -262,14 +267,19 @@
 
 !if $(FOSSIL_ENABLE_SSL)!=0
 TCC       = $(TCC) /DFOSSIL_ENABLE_SSL=1
 RCC       = $(RCC) /DFOSSIL_ENABLE_SSL=1
 LIBS      = $(LIBS) $(SSLLIB)
 LIBDIR    = $(LIBDIR) /LIBPATH:$(SSLLIBDIR)
 !endif
+
+!if $(FOSSIL_ENABLE_EXEC_REL_PATHS)!=0
+TCC       = $(TCC) /DFOSSIL_ENABLE_EXEC_REL_PATHS=1
+RCC       = $(RCC) /DFOSSIL_ENABLE_EXEC_REL_PATHS=1
+!endif
 
 !if $(FOSSIL_ENABLE_LEGACY_MV_RM)!=0
 TCC       = $(TCC) /DFOSSIL_ENABLE_LEGACY_MV_RM=1
 RCC       = $(RCC) /DFOSSIL_ENABLE_LEGACY_MV_RM=1
 !endif
 
 !if $(FOSSIL_ENABLE_TH1_DOCS)!=0

@@ -299,14 +309,16 @@
                  /DSQLITE_THREADSAFE=0 \
                  /DSQLITE_DEFAULT_FILE_FORMAT=4 \
                  /DSQLITE_OMIT_DEPRECATED \
                  /DSQLITE_ENABLE_EXPLAIN_COMMENTS \
                  /DSQLITE_ENABLE_FTS4 \
                  /DSQLITE_ENABLE_FTS3_PARENTHESIS \
                  /DSQLITE_ENABLE_DBSTAT_VTAB \
+                 /DSQLITE_ENABLE_JSON1 \
+                 /DSQLITE_ENABLE_FTS5 \
                  /DSQLITE_WIN32_NO_ANSI
 
 SHELL_OPTIONS = /Dmain=sqlite3_shell \
                 /DSQLITE_OMIT_LOAD_EXTENSION=1 \
                 /DUSE_SYSTEM_SQLITE=$(USE_SYSTEM_SQLITE) \
                 /DSQLITE_SHELL_DBNAME_PROC=fossil_open \
                 /Daccess=file_access \
Changes to win/fossil.rc.
123
124
125
126
127
128
129





130
131
132
133
134
135
136
      VALUE "SslEnabled", "Yes, " OPENSSL_VERSION_TEXT "\0"
#endif /* defined(FOSSIL_ENABLE_SSL) */
#if defined(FOSSIL_ENABLE_LEGACY_MV_RM)
      VALUE "LegacyMvRm", "Yes\0"
#else
      VALUE "LegacyMvRm", "No\0"
#endif /* defined(FOSSIL_ENABLE_LEGACY_MV_RM) */





#if defined(FOSSIL_ENABLE_TH1_DOCS)
      VALUE "Th1Docs", "Yes\0"
#else
      VALUE "Th1Docs", "No\0"
#endif /* defined(FOSSIL_ENABLE_TH1_DOCS) */
#if defined(FOSSIL_ENABLE_TH1_HOOKS)
      VALUE "Th1Hooks", "Yes\0"







>
>
>
>
>







123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
      VALUE "SslEnabled", "Yes, " OPENSSL_VERSION_TEXT "\0"
#endif /* defined(FOSSIL_ENABLE_SSL) */
#if defined(FOSSIL_ENABLE_LEGACY_MV_RM)
      VALUE "LegacyMvRm", "Yes\0"
#else
      VALUE "LegacyMvRm", "No\0"
#endif /* defined(FOSSIL_ENABLE_LEGACY_MV_RM) */
#if defined(FOSSIL_ENABLE_EXEC_REL_PATHS)
      VALUE "ExecRelPaths", "Yes\0"
#else
      VALUE "ExecRelPaths", "No\0"
#endif /* defined(FOSSIL_ENABLE_EXEC_REL_PATHS) */
#if defined(FOSSIL_ENABLE_TH1_DOCS)
      VALUE "Th1Docs", "Yes\0"
#else
      VALUE "Th1Docs", "No\0"
#endif /* defined(FOSSIL_ENABLE_TH1_DOCS) */
#if defined(FOSSIL_ENABLE_TH1_HOOKS)
      VALUE "Th1Hooks", "Yes\0"
Changes to www/changes.wiki.
1
2
3
4

5
6
7
8
9
10




























11
12
13
14
15
16
17
<title>Change Log</title>

<h2>Changes for Version 1.34 (2015-??-??)</h2>
  *  Fix --hard option to mv/rm to enable them to work properly with certain

     relative paths.
  *  Add minimal 'lsearch' command to TH1. Only exact case-sensitive matching
     is supported.
  *  Add 'glob_match' command to TH1.
  *  Update internal Unicode character tables, used in regular expression
     handling, from version 7.0 to 8.0.





























<h2>Changes for Version 1.33 (2015-05-23)</h2>
  *  Improved fork detection on [/help?cmd=update|fossil update],
     [/help?cmd=status|fossil status] and related commands.
  *  Change the default skin to what used to be called "San Francisco Modern".
  *  Add the [/repo-tabsize] web page
  *  Add [/help?cmd=import|fossil import --svn], for importing a subversion


|
|
>
|
<
<
<


>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>







1
2
3
4
5
6



7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
<title>Change Log</title>

<h2>Changes for Version 1.34 (2015-11-02)</h2>

  *  Make the [/help?cmd=clean|fossil clean] command undoable for files less
     than 10MiB.



  *  Update internal Unicode character tables, used in regular expression
     handling, from version 7.0 to 8.0.
  *  Add the new [/help?cmd=amend|amend] command which is used to modify
     tags of a "check-in".
  *  Fix bug in [/help?cmd=import|import] command, handling version 3 of
     the svndump format for subversion.
  *  Add the [/help?cmd=all|all cache] command.
  *  TH1 enhancements:
     <ul><li>Add minimal <nowiki>[lsearch]</nowiki> command. Only exact
     case-sensitive matching is supported.</li>
     <li>Add the <nowiki>[glob_match]</nowiki>, <nowiki>[markdown]</nowiki>,
     <nowiki>[dir]</nowiki>, and <nowiki>[encode64]</nowiki> commands.</li>
     <li>Add the <nowiki>[tclIsSafe] and [tclMakeSafe]</nowiki> commands to
     the Tcl integration subsystem.</li>
     <li>Add 'double', 'integer', and 'list' classes to the
     <nowiki>[string is]</nowiki> command.</li>
     </ul>
  *  Add the --undo option to the [/help?cmd=diff|diff] command.
  *  Build in Antirez's "linenoise" command-line editing library for use with
     the [/help?cmd=sqlite3|fossil sql] command on Unix platforms.
  *  Add [/help?cmd=stash|stash cat] as an alias for the
     [/help?cmd=stash|stash show] command.
  *  Automatically pull before [/help?cmd=merge|fossil merge] when auto-sync
     is enabled.
  *  Fix --hard option to [/help?cmd=mv|fossil mv] and [/help?cmd=rm|fossil rm]
     to enable them to work properly with certain relative paths.
  *  Change the mimetype for ".n" and ".man" files to text/plain.
  *  Display improvements in the [/help?cmd=bisect|fossil bisect chart] command.
  *  Updated the built-in SQLite to version 3.9.1 and activated JSON1 and FTS5
     support (both currently unused within Fossil).

<h2>Changes for Version 1.33 (2015-05-23)</h2>
  *  Improved fork detection on [/help?cmd=update|fossil update],
     [/help?cmd=status|fossil status] and related commands.
  *  Change the default skin to what used to be called "San Francisco Modern".
  *  Add the [/repo-tabsize] web page
  *  Add [/help?cmd=import|fossil import --svn], for importing a subversion
Changes to www/checkin_names.wiki.
13
14
15
16
17
18
19

20
21
22
23
24
25
26
<li> <b>root :</b> <i>branchname</i>
<li> Special names:
<ul>
<li> <b>tip</b>
<li> <b>current</b>
<li> <b>next</b>
<li> <b>previous</b> or <b>prev</b>

</ul>
</ul>
</td></tr>
</table>
Many Fossil [/help|commands] and [./webui.wiki | web-interface] URLs accept
check-in names as an argument.  For example, the "[/help/info|info]" command
accepts an optional check-in name to identify the specific checkout







>







13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
<li> <b>root :</b> <i>branchname</i>
<li> Special names:
<ul>
<li> <b>tip</b>
<li> <b>current</b>
<li> <b>next</b>
<li> <b>previous</b> or <b>prev</b>
<li> <b>ckout</b> for embedded docs
</ul>
</ul>
</td></tr>
</table>
Many Fossil [/help|commands] and [./webui.wiki | web-interface] URLs accept
check-in names as an argument.  For example, the "[/help/info|info]" command
accepts an optional check-in name to identify the specific checkout
214
215
216
217
218
219
220






221
222
223
224
225
226
227
equivalent to the timestamp tag "5000-01-01".

If the command is being run from a working check-out (not against a bare
repository) then a few extra tags apply.  The "current" tag means the
current check-out.  The "next" tag means the youngest child of the
current check-out.  And the "previous" or "prev" tag means the primary
(non-merge) parent of the current check-out.







<h2>Additional Examples</h2>

To view the changes in the most recent check-in prior to the version currently
checked out:

<blockquote><pre>







>
>
>
>
>
>







215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
equivalent to the timestamp tag "5000-01-01".

If the command is being run from a working check-out (not against a bare
repository) then a few extra tags apply.  The "current" tag means the
current check-out.  The "next" tag means the youngest child of the
current check-out.  And the "previous" or "prev" tag means the primary
(non-merge) parent of the current check-out.

For embedded documentation, the tag "ckout" means the version as present in
the local source tree on disk, provided that the web server is started using
"fossil ui" or "fossil server" from within the source tree. This tag can be 
used to preview local changes to documentation before committing them. It does
not apply to CLI commands.

<h2>Additional Examples</h2>

To view the changes in the most recent check-in prior to the version currently
checked out:

<blockquote><pre>
Changes to www/copyright-release.html.
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
<h1 align="center">
Fossil SCM Contributor Agreement
</h1>

<p>
This agreement applies to your contribution of material to the
Fossil Software Configuration Management System ("Fossil") that is
managed by Hipp, Wyrick &amp; Company, Inc. ("Hwaci") and
sets out the intellectual property rights you grant to Hwaci in the
contributed material.
The terms "contribution" and "contributed material" mean any source code, 
object code, patch, tool, sample, graphic, specification, manual, 
documentation, or any other material posted, submitted, or uploaded by 
you to the Fossil project.
 The term "you" means the person identified
and signing at the bottom of this document.  If your contribution
is on behalf of a company, the term "you" also means the company
identified in the signature area below.

<ol>










|
|
|







1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
<h1 align="center">
Fossil SCM Contributor Agreement
</h1>

<p>
This agreement applies to your contribution of material to the
Fossil Software Configuration Management System ("Fossil") that is
managed by Hipp, Wyrick &amp; Company, Inc. ("Hwaci") and
sets out the intellectual property rights you grant to Hwaci in the
contributed material.
The terms "contribution" and "contributed material" mean any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted, submitted, or uploaded by
you to the Fossil project.
 The term "you" means the person identified
and signing at the bottom of this document.  If your contribution
is on behalf of a company, the term "you" also means the company
identified in the signature area below.

<ol>
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
     contribution as if each of us were the sole owners, and if one of us
     makes a derivative work of your contribution, the one who makes
     (or has made) the derivative work will be the sole owner of that
     derivative work.
<li> You agree that you will not assert any moral rights in your
     contribution against Hwaci, Hwaci's licensees or transferees, or
     any other user or consumer of your contribution.
<li> You agree that Hwaci may register a copyright in your contribution and 
     exercise all ownership rights associated with it.
<li> You agree that neither you nor Hwaci has any duty to consult with,
     obtain the consent of, or pay or render an accounting to the other
     for any use or distribution of your contribution.
</ul>

<li><p>







|







31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
     contribution as if each of us were the sole owners, and if one of us
     makes a derivative work of your contribution, the one who makes
     (or has made) the derivative work will be the sole owner of that
     derivative work.
<li> You agree that you will not assert any moral rights in your
     contribution against Hwaci, Hwaci's licensees or transferees, or
     any other user or consumer of your contribution.
<li> You agree that Hwaci may register a copyright in your contribution and
     exercise all ownership rights associated with it.
<li> You agree that neither you nor Hwaci has any duty to consult with,
     obtain the consent of, or pay or render an accounting to the other
     for any use or distribution of your contribution.
</ul>

<li><p>
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
     company (if applicable).
</ul>
</ol>

<p>By filling in the following information and signing your name,
you agree to be bound by all of the terms
set forth in this agreement.  Please print clearly.</p>
 
<center>
<p><table width="80%" border="1" cellpadding="0" cellspacing="0">
<tr><td width="20%" valign="top">Your name &amp email:</td><td width="80%">

    <!-- Replace this line with your name and email --> &nbsp;<p>&nbsp;

</td></tr>







|







71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
     company (if applicable).
</ul>
</ol>

<p>By filling in the following information and signing your name,
you agree to be bound by all of the terms
set forth in this agreement.  Please print clearly.</p>

<center>
<p><table width="80%" border="1" cellpadding="0" cellspacing="0">
<tr><td width="20%" valign="top">Your name &amp email:</td><td width="80%">

    <!-- Replace this line with your name and email --> &nbsp;<p>&nbsp;

</td></tr>
Changes to www/customgraph.md.
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
Fossil includes several options for changing the graph's style without having
to delve into CSS. These can be found in the details.txt file of your skin or
under Admin/Skins/Details in the web UI.

*   ###`timeline-arrowheads`

    Set this to `0` to hide arrowheads on primary child lines.
    
*   ###`timeline-circle-nodes`

    Set this to `1` to make check-in nodes circular instead of square.

*   ###`timeline-color-graph-lines`

    Set this to `1` to colorize primary child lines.







|







8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
Fossil includes several options for changing the graph's style without having
to delve into CSS. These can be found in the details.txt file of your skin or
under Admin/Skins/Details in the web UI.

*   ###`timeline-arrowheads`

    Set this to `0` to hide arrowheads on primary child lines.

*   ###`timeline-circle-nodes`

    Set this to `1` to make check-in nodes circular instead of square.

*   ###`timeline-color-graph-lines`

    Set this to `1` to colorize primary child lines.
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60

## <a id="pos-elems"></a>Positioning Elements

These elements aren't intended to be seen. They're only used to help position
the graph and its visible elements.

*   ###<a id="tl-canvas"></a>`.tl-canvas`
    
    Set the left and right margins on this class to give the desired amount
    of space between the graph and its adjacent columns in the timeline.
  
    #### Additional Classes
  
    * `.sel`: See [`.tl-node`](#tl-node) for more information.

*   ###<a id="tl-rail"></a>`.tl-rail`

    Think of rails as invisible vertical lines on which check-in nodes are
    placed. The more simultaneous branches in a graph, the more rails required
    to draw it. Setting the `width` property on this class determines the







|


|

|







41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60

## <a id="pos-elems"></a>Positioning Elements

These elements aren't intended to be seen. They're only used to help position
the graph and its visible elements.

*   ###<a id="tl-canvas"></a>`.tl-canvas`

    Set the left and right margins on this class to give the desired amount
    of space between the graph and its adjacent columns in the timeline.

    #### Additional Classes

    * `.sel`: See [`.tl-node`](#tl-node) for more information.

*   ###<a id="tl-rail"></a>`.tl-rail`

    Think of rails as invisible vertical lines on which check-in nodes are
    placed. The more simultaneous branches in a graph, the more rails required
    to draw it. Setting the `width` property on this class determines the
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
These are the elements you can actually see on the timeline graph: the nodes,
arrows, and lines. Each of these elements may also have additional classes
attached to them, depending on their context.

*   ###<a id="tl-node"></a>`.tl-node`

    A node exists for each check-in in the timeline.
  
    #### Additional Classes
    
    *   `.leaf`: Specifies that the check-in is a leaf (i.e. that it has no
        children in the same branch).
    
    *   `.merge`: Specifies that the check-in contains a merge.
    
    *   `.sel`: When the user clicks a node to designate it as the beginning
        of a diff, this class is added to both the node itself and the
        [`.tl-canvas`](#tl-canvas) element. The class is removed from both
        elements when the node is clicked again.

*   ###<a id="tl-arrow"></a>`.tl-arrow`

    Arrows point from parent nodes to their children. Technically, this
    class is just for the arrowhead. The rest of the arrow is composed
    of [`.tl-line`](#tl-line) elements.

    There are six additional classes that are used to distinguish the different
    types of arrows. However, only these combinations are valid:
    
    *   `.u`: Up arrow that points to a child from its primary parent.
    
    *   `.u.sm`: Smaller up arrow, used when there is limited space between
        parent and child nodes.
    
    *   `.merge.l` or `.merge.r`: Merge arrow pointing either to the left or
        right.
    
    *   `.warp`: A timewarped arrow (always points to the right), used when a
        misconfigured clock makes a check-in appear to have occurred before its
        parent ([example](https://www.sqlite.org/src/timeline?c=2010-09-29&nd)).
    
*   ###<a id="tl-line"></a>`.tl-line`

    Along with arrows, lines connect parent and child nodes. Line thickness is
    determined by the `width` property, regardless of whether the line is
    horizontal or vertical. You can also use borders to create special line
    styles. Here's a CSS snippet for making dotted merge lines:

        .tl-line.merge {
          width: 0;
          background: transparent;
          border: 0 dotted #000;
        }
        .tl-line.merge.h {
          border-top-width: 1px;
        }
        .tl-line.merge.v {
          border-left-width: 1px;
        }

    #### Additional Classes
    
    *   `.merge`: A merge line.
    
    *   `.h` or `.v`: Horizontal or vertical.
    
    *   `.warp`: A timewarped line.


## <a id="default-css"></a>Default Timeline Graph CSS

    .tl-canvas {
      margin: 0 6px 0 10px;







|

|


|

|













|

|


|


|



|




















|

|

|







82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
These are the elements you can actually see on the timeline graph: the nodes,
arrows, and lines. Each of these elements may also have additional classes
attached to them, depending on their context.

*   ###<a id="tl-node"></a>`.tl-node`

    A node exists for each check-in in the timeline.

    #### Additional Classes

    *   `.leaf`: Specifies that the check-in is a leaf (i.e. that it has no
        children in the same branch).

    *   `.merge`: Specifies that the check-in contains a merge.

    *   `.sel`: When the user clicks a node to designate it as the beginning
        of a diff, this class is added to both the node itself and the
        [`.tl-canvas`](#tl-canvas) element. The class is removed from both
        elements when the node is clicked again.

*   ###<a id="tl-arrow"></a>`.tl-arrow`

    Arrows point from parent nodes to their children. Technically, this
    class is just for the arrowhead. The rest of the arrow is composed
    of [`.tl-line`](#tl-line) elements.

    There are six additional classes that are used to distinguish the different
    types of arrows. However, only these combinations are valid:

    *   `.u`: Up arrow that points to a child from its primary parent.

    *   `.u.sm`: Smaller up arrow, used when there is limited space between
        parent and child nodes.

    *   `.merge.l` or `.merge.r`: Merge arrow pointing either to the left or
        right.

    *   `.warp`: A timewarped arrow (always points to the right), used when a
        misconfigured clock makes a check-in appear to have occurred before its
        parent ([example](https://www.sqlite.org/src/timeline?c=2010-09-29&nd)).

*   ###<a id="tl-line"></a>`.tl-line`

    Along with arrows, lines connect parent and child nodes. Line thickness is
    determined by the `width` property, regardless of whether the line is
    horizontal or vertical. You can also use borders to create special line
    styles. Here's a CSS snippet for making dotted merge lines:

        .tl-line.merge {
          width: 0;
          background: transparent;
          border: 0 dotted #000;
        }
        .tl-line.merge.h {
          border-top-width: 1px;
        }
        .tl-line.merge.v {
          border-left-width: 1px;
        }

    #### Additional Classes

    *   `.merge`: A merge line.

    *   `.h` or `.v`: Horizontal or vertical.

    *   `.warp`: A timewarped line.


## <a id="default-css"></a>Default Timeline Graph CSS

    .tl-canvas {
      margin: 0 6px 0 10px;
Changes to www/customskin.md.
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
"footer.txt", and "header.txt",
that describe the CSS, rendering options,
footer, and header for that skin, respectively.

The skin of a repository can be changed to any of the built-in skins using
the web interface by going to the /setup_skin web page (requires Admin
privileges) and clicking the appropriate button.  Or, the --skin command
line option can be used for the 
[fossil ui](../../../help?cmd=ui) or
[fossil server](../../../help?cmd=server) commands to force that particular
instance of Fossil to use the specified built-in skin.

Sharing Skins
-------------

The skin of a repository is not part of the versioned state and does not
"push" or "pull" like checked-in files.  The skin is local to the 
repository.  However, skins can be shared between repositories using
the [fossil config](../../../help?cmd=configuration) command.
The "fossil config push skin" command will send the local skin to a remote
repository and the "fossil config pull skin" command will import a skin
from a remote repository.  The "fossil config export skin FILENAME"
will export the skin for a repository into a file FILENAME.  This file
can then be imported into a different repository using the







|








|







52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
"footer.txt", and "header.txt",
that describe the CSS, rendering options,
footer, and header for that skin, respectively.

The skin of a repository can be changed to any of the built-in skins using
the web interface by going to the /setup_skin web page (requires Admin
privileges) and clicking the appropriate button.  Or, the --skin command
line option can be used for the
[fossil ui](../../../help?cmd=ui) or
[fossil server](../../../help?cmd=server) commands to force that particular
instance of Fossil to use the specified built-in skin.

Sharing Skins
-------------

The skin of a repository is not part of the versioned state and does not
"push" or "pull" like checked-in files.  The skin is local to the
repository.  However, skins can be shared between repositories using
the [fossil config](../../../help?cmd=configuration) command.
The "fossil config push skin" command will send the local skin to a remote
repository and the "fossil config pull skin" command will import a skin
from a remote repository.  The "fossil config export skin FILENAME"
will export the skin for a repository into a file FILENAME.  This file
can then be imported into a different repository using the
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
the skin of the repository from which it was cloned.

Header And Footer Processing
----------------------------

The header.txt and footer.txt files of a scan are merely the HTML text
of the header and footer.  Except, before being prepended and appended to
the content, the header and footer text are run through a 
[TH1 interpreter](./th1.md) that might adjust the text as follows:

  *  All text within &lt;th1&gt;...&lt;/th1&gt; is elided from the
     output and that text is instead run as a TH1 script.  That TH1
     script has the opportunity to insert new text in place of itself,
     or to inhibit or enable the output of subsequent text.








|







91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
the skin of the repository from which it was cloned.

Header And Footer Processing
----------------------------

The header.txt and footer.txt files of a scan are merely the HTML text
of the header and footer.  Except, before being prepended and appended to
the content, the header and footer text are run through a
[TH1 interpreter](./th1.md) that might adjust the text as follows:

  *  All text within &lt;th1&gt;...&lt;/th1&gt; is elided from the
     output and that text is instead run as a TH1 script.  That TH1
     script has the opportunity to insert new text in place of itself,
     or to inhibit or enable the output of subsequent text.

136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
and for all scripts contained within them both.  Hence, any global
TH1 variables that are set by the header are available to the footer.

TH1 Variables
-------------

Before expanding the TH1 within the header and footer, Fossil first
initializes a number of TH1 variables to values that depend on 
repository settings and the specific page being generated.

   *   **project_name** - The project_name variable is filled with the
       name of the project as configured under the Admin/Configuration
       menu.

   *   **title** - The title variable holds the title of the page being







|







136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
and for all scripts contained within them both.  Hence, any global
TH1 variables that are set by the header are available to the footer.

TH1 Variables
-------------

Before expanding the TH1 within the header and footer, Fossil first
initializes a number of TH1 variables to values that depend on
repository settings and the specific page being generated.

   *   **project_name** - The project_name variable is filled with the
       name of the project as configured under the Admin/Configuration
       menu.

   *   **title** - The title variable holds the title of the page being
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
   *   **secureurl** - The same as $baseurl except that if the scheme is
                       "http:" it is changed to "https:"

   *   **home** - The $baseurl without the scheme and hostname.  For example,
       if the $baseurl is "http://projectX.com/cgi-bin/fossil" then the
       $home will be just "/cgi-bin/fossil".

   *   **index_page** - The landing page URI as 
       specified by the Admin/Configuration setup page.

   *   **current_page** - The name of the page currently being processed,
       without the leading "/" and without query parameters.
       Examples:  "timeline", "doc/trunk/README.txt", "wiki".  

   *   **csrf_token** - A token used to prevent cross-site request forgery.

   *   **release_version** - The release version of Fossil.  Ex: "1.31"

   *   **manifest_version** - A prefix on the SHA1 check-in hash of the
       specific version of fossil that is running.  Ex: "\[47bb6432a1\]"







|




|







160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
   *   **secureurl** - The same as $baseurl except that if the scheme is
                       "http:" it is changed to "https:"

   *   **home** - The $baseurl without the scheme and hostname.  For example,
       if the $baseurl is "http://projectX.com/cgi-bin/fossil" then the
       $home will be just "/cgi-bin/fossil".

   *   **index_page** - The landing page URI as
       specified by the Admin/Configuration setup page.

   *   **current_page** - The name of the page currently being processed,
       without the leading "/" and without query parameters.
       Examples:  "timeline", "doc/trunk/README.txt", "wiki".

   *   **csrf_token** - A token used to prevent cross-site request forgery.

   *   **release_version** - The release version of Fossil.  Ex: "1.31"

   *   **manifest_version** - A prefix on the SHA1 check-in hash of the
       specific version of fossil that is running.  Ex: "\[47bb6432a1\]"
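
As a small, hypothetical illustration, a header script could branch on one
of these variables (the page name tested here is just an example):

    <th1>
      if {[string compare $current_page wiki] == 0} {
        puts "Browsing the $project_name wiki\n"
      }
    </th1>
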
Changes to www/fossil_prompt.sh.
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45

#-------------------------------------------------------------------------
#   get_fossil_data()
#
# If the current directory is part of a fossil checkout, then populate
# a series of global variables based on the current state of that
# checkout. Variables are populated based on the output of the [fossil info]
# command.
#
# If the current directory is not part of a fossil checkout, set global
# variable $fossil_info_project_name to an empty string and return.
#
function get_fossil_data() { 
  fossil_info_project_name=""
  eval `get_fossil_data2`
}
function get_fossil_data2() {
  fossil info 2> /dev/null | sed 's/"//g'|grep "^[^ ]*:" | while read LINE ; do 
    local field=`echo $LINE | sed 's/:.*$//' | sed 's/-/_/'`
    local value=`echo $LINE | sed 's/^[^ ]*: *//'`
    echo fossil_info_${field}=\"${value}\"
  done
}

#-------------------------------------------------------------------------
#   set_prompt()
#
# Set the PS1 variable. If the current directory is part of a fossil
# checkout then the prompt contains information relating to the state
# of the checkout. 
#
# Otherwise, if the current directory is not part of a fossil checkout, it
# is set to a fairly standard bash prompt containing the host name, user
# name and current directory.
#
function set_prompt() {
  get_fossil_data
  if [ -n "$fossil_info_project_name" ] ; then 
    project=$fossil_info_project_name
    checkout=`echo $fossil_info_checkout | sed 's/^\(........\).*/\1/'`
    date=`echo $fossil_info_checkout | sed 's/^[^ ]* *..//' | sed 's/:[^:]*$//'`
    tags=$fossil_info_tags
    local_root=`echo $fossil_info_local_root | sed 's/\/$//'`
    local=`pwd | sed "s*${local_root}**" | sed "s/^$/\//"`













|




|











|







|







1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45

#-------------------------------------------------------------------------
#   get_fossil_data()
#
# If the current directory is part of a fossil checkout, then populate
# a series of global variables based on the current state of that
# checkout. Variables are populated based on the output of the [fossil info]
# command.
#
# If the current directory is not part of a fossil checkout, set global
# variable $fossil_info_project_name to an empty string and return.
#
function get_fossil_data() {
  fossil_info_project_name=""
  eval `get_fossil_data2`
}
function get_fossil_data2() {
  fossil info 2> /dev/null | sed 's/"//g'|grep "^[^ ]*:" | while read LINE ; do
    local field=`echo $LINE | sed 's/:.*$//' | sed 's/-/_/'`
    local value=`echo $LINE | sed 's/^[^ ]*: *//'`
    echo fossil_info_${field}=\"${value}\"
  done
}

#-------------------------------------------------------------------------
#   set_prompt()
#
# Set the PS1 variable. If the current directory is part of a fossil
# checkout then the prompt contains information relating to the state
# of the checkout.
#
# Otherwise, if the current directory is not part of a fossil checkout, it
# is set to a fairly standard bash prompt containing the host name, user
# name and current directory.
#
function set_prompt() {
  get_fossil_data
  if [ -n "$fossil_info_project_name" ] ; then
    project=$fossil_info_project_name
    checkout=`echo $fossil_info_checkout | sed 's/^\(........\).*/\1/'`
    date=`echo $fossil_info_checkout | sed 's/^[^ ]* *..//' | sed 's/:[^:]*$//'`
    tags=$fossil_info_tags
    local_root=`echo $fossil_info_local_root | sed 's/\/$//'`
    local=`pwd | sed "s*${local_root}**" | sed "s/^$/\//"`

Changes to www/makefile.wiki.
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83

The src/ subdirectory also contains documentation about the
makeheaders preprocessor program:

  11.  [../src/makeheaders.html | makeheaders.html]

Click on the link to read this documentation.  In addition there is
a [http://www.tcl.tk/ | Tcl] script used to build the various makefiles:

  12.  makemake.tcl

Running this Tcl script will automatically regenerate all makefiles.
In order to add a new source file to the Fossil implementation, simply
edit makemake.tcl to add the new filename, then rerun the script, and
all of the makefiles for all targets will be rebuilt.







|







69
70
71
72
73
74
75
76
77
78
79
80
81
82
83

The src/ subdirectory also contains documentation about the
makeheaders preprocessor program:

  11.  [../src/makeheaders.html | makeheaders.html]

Click on the link to read this documentation.  In addition there is
a [http://www.tcl-lang.org/ | Tcl] script used to build the various makefiles:

  12.  makemake.tcl

Running this Tcl script will automatically regenerate all makefiles.
In order to add a new source file to the Fossil implementation, simply
edit makemake.tcl to add the new filename, then rerun the script, and
all of the makefiles for all targets will be rebuilt.
Changes to www/mkdownload.tcl.
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
  puts $out "<center><b><a href=\"$hr\">Version $vers</a></b></center>"
  puts $out "</td></tr>"
  puts $out "<tr>"

  foreach {prefix suffix img desc} {
    fossil-linux-x86 zip linux.gif {Linux 3.x x86}
    fossil-macosx-x86 zip mac.gif {Mac 10.x x86}
    fossil-openbsd-x86 zip openbsd.gif {OpenBSD 4.x x86}
    fossil-w32 zip win32.gif {Windows}
    fossil-src tar.gz src.gif {Source Tarball}
  } {
    set filename download/$prefix-$vers.$suffix
    if {[file exists $filename]} {
      set size [file size $filename]
      set units bytes







|







68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
  puts $out "<center><b><a href=\"$hr\">Version $vers</a></b></center>"
  puts $out "</td></tr>"
  puts $out "<tr>"

  foreach {prefix suffix img desc} {
    fossil-linux-x86 zip linux.gif {Linux 3.x x86}
    fossil-macosx-x86 zip mac.gif {Mac 10.x x86}
    fossil-openbsd-x86 zip openbsd.gif {OpenBSD 5.x x86}
    fossil-w32 zip win32.gif {Windows}
    fossil-src tar.gz src.gif {Source Tarball}
  } {
    set filename download/$prefix-$vers.$suffix
    if {[file exists $filename]} {
      set size [file size $filename]
      set units bytes
Changes to www/quotes.wiki.
75
76
77
78
79
80
81




82
83
84
85
86
87
88
89
90
91
92
93
94
<li>If programmers _really_ wanted to help scientists, they'd build a version control
system that was more usable than Git.

<blockquote>
<i>Tweet by Greg Wilson @gvwilson on 2015-02-22 17:47</i>
</blockquote>





</ol>

<h2>On The Usability Of Fossil:</h2>

<ol>
<li value=10>
Fossil mesmerizes me with simplicity especially after I struggled to
get a bug-tracking system to work with mercurial.

<blockquote>
<i>rawjeev at [http://stackoverflow.com/questions/156322/what-do-people-think-of-the-fossil-dvcs]</i>
</blockquote>








>
>
>
>





|







75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
<li>If programmers _really_ wanted to help scientists, they'd build a version control
system that was more usable than Git.

<blockquote>
<i>Tweet by Greg Wilson @gvwilson on 2015-02-22 17:47</i>
</blockquote>

<li><img src='xkcd-git.gif' align='top'>

<blockquote><i>Randall Munroe.  [http://xkcd.com/1597/]</i></blockquote>

</ol>

<h2>On The Usability Of Fossil:</h2>

<ol>
<li value=11>
Fossil mesmerizes me with simplicity especially after I struggled to
get a bug-tracking system to work with mercurial.

<blockquote>
<i>rawjeev at [http://stackoverflow.com/questions/156322/what-do-people-think-of-the-fossil-dvcs]</i>
</blockquote>

120
121
122
123
124
125
126
127
128
129
130
131
132
133
134

</ol>


<h2>On Git Versus Fossil</h2>

<ol>
<li value=14>
Just want to say thanks for fossil making my life easier.... 
Also <nowiki>[for]</nowiki> not having a misanthropic command line interface.

<blockquote>
<i>Joshua Paine at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg02736.html]</i>
</blockquote>








|







124
125
126
127
128
129
130
131
132
133
134
135
136
137
138

</ol>


<h2>On Git Versus Fossil</h2>

<ol>
<li value=15>
Just want to say thanks for fossil making my life easier.... 
Also <nowiki>[for]</nowiki> not having a misanthropic command line interface.

<blockquote>
<i>Joshua Paine at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg02736.html]</i>
</blockquote>

Changes to www/server.wiki.
177
178
179
180
181
182
183




184
185
186
187
188
189
190
must be readable by the process which executes the CGI.</li>
<li>ALL directories leading to the CGI script must also be readable and the CGI
script itself must be executable for the user under which it will run (which often differs
from the one running the web server - consult your site's documentation or administrator).</li>
<li>The repository file AND the directory containing it must be writable by the same account
which executes the Fossil binary (again, this might differ from the WWW user). The directory
needs to be writable so that sqlite can write its journal files.</li>




</ul>
</p>

<p>
Once the script is set up correctly, and assuming your server is also set
correctly, you should be able to access your repository with a URL like:
<b>http://mydomain.org/cgi-bin/repo</b> (assuming the "repo" script is







>
>
>
>







177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
must be readable by the process which executes the CGI.</li>
<li>ALL directories leading to the CGI script must also be readable and the CGI
script itself must be executable for the user under which it will run (which often differs
from the one running the web server - consult your site's documentation or administrator).</li>
<li>The repository file AND the directory containing it must be writable by the same account
which executes the Fossil binary (again, this might differ from the WWW user). The directory
needs to be writable so that sqlite can write its journal files.</li>
<li>Fossil must be able to create temporary files, the default directory
for which depends on the OS.  When the CGI process is operating within
a chroot, ensure that this directory exists and is readable/writeable
by the user who executes the Fossil binary.</li>
</ul>
</p>

<p>
Once the script is set up correctly, and assuming your server is also set
correctly, you should be able to access your repository with a URL like:
<b>http://mydomain.org/cgi-bin/repo</b> (assuming the "repo" script is
Changes to www/settings.wiki.
40
41
42
43
44
45
46
47

48
49
50
51
52
53
54
55
<tt>manifest</tt>. The most important is <tt>ignore-glob</tt> which
specifies which files should be ignored when looking for unmanaged files
with the <tt>extras</tt> command.

Because these options can change over time, and because replicating
changes is inconvenient, these settings are "versionable". As well as being
able to be set using the <tt>settings</tt> command or the web interface,
you can created versioned files in the <tt>.fossil-settings</tt>

directory named with the setting name. The contents of the file are the
value of the setting, and these files are checked in, committed, merged,
and so on, as with any other file.

Where a setting is a list of values, such as <tt>ignore-glob</tt>, you
can use a newline as a separator as well as a comma.

For example, to set the list of ignored files, create a







|
>
|







40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
<tt>manifest</tt>. The most important is <tt>ignore-glob</tt> which
specifies which files should be ignored when looking for unmanaged files
with the <tt>extras</tt> command.

Because these options can change over time, and because replicating
changes is inconvenient, these settings are "versionable". As well as being
able to be set using the <tt>settings</tt> command or the web interface,
you can create versioned files in the <tt>.fossil-settings</tt>
subdirectory of the check-out root, named with the setting name.
The contents of the file are the
value of the setting, and these files are checked in, committed, merged,
and so on, as with any other file.

Where a setting is a list of values, such as <tt>ignore-glob</tt>, you
can use a newline as a separator as well as a comma.

For example, to set the list of ignored files, create a
Changes to www/stats.wiki.
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
<td>3.4&nbsp;GB
<td>45.5&nbsp;MB
<td>73:1
<td>29.9&nbsp;MB
</tr>

<tr align="center">
<td>[http://core.tcl.tk/tcl/timeline | TCL]
<td>139662
<td>18125
<td>6183&nbsp;days<br>16.93&nbsp;years
<td>6.6&nbsp;GB
<td>192.6&nbsp;MB
<td>34:1
<td>117.1&nbsp;MB







|







30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
<td>3.4&nbsp;GB
<td>45.5&nbsp;MB
<td>73:1
<td>29.9&nbsp;MB
</tr>

<tr align="center">
<td>[http://core.tcl-lang.org/tcl/timeline | TCL]
<td>139662
<td>18125
<td>6183&nbsp;days<br>16.93&nbsp;years
<td>6.6&nbsp;GB
<td>192.6&nbsp;MB
<td>34:1
<td>117.1&nbsp;MB
Changes to www/th1.md.
111
112
113
114
115
116
117

118
119
120
121
122
123
124
125
126
127
128
129
130
131
132

133

134
135
136
137
138
139
140
141
142
143

144
145
146
147
148
149
150
  *  string range STRING FIRST LAST
  *  string repeat STRING COUNT
  *  unset VARNAME
  *  uplevel ?LEVEL? SCRIPT
  *  upvar ?FRAME? OTHERVAR MYVAR ?OTHERVAR MYVAR?

All of the above commands work as in the original Tcl.  Refer to the

Tcl documentation for details.

TH1 Extended Commands
---------------------

There are many new commands added to TH1 and used to access the special
features of Fossil.  The following is a summary of the extended commands:

  *  anoncap
  *  anycap
  *  artifact
  *  checkout
  *  combobox
  *  date
  *  decorate

  *  enable_output

  *  getParameter
  *  glob_match
  *  globalState
  *  hascap
  *  hasfeature
  *  html
  *  htmlize
  *  http
  *  httpize
  *  linecount

  *  puts
  *  query
  *  randhex
  *  regexp
  *  reinitialize
  *  render
  *  repository







>
|














>

>










>







111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
  *  string range STRING FIRST LAST
  *  string repeat STRING COUNT
  *  unset VARNAME
  *  uplevel ?LEVEL? SCRIPT
  *  upvar ?FRAME? OTHERVAR MYVAR ?OTHERVAR MYVAR?

All of the above commands work as in the original Tcl.  Refer to the
<a href="https://www.tcl-lang.org/man/tcl/contents.htm">Tcl documentation</a>
for details.

TH1 Extended Commands
---------------------

There are many new commands added to TH1 and used to access the special
features of Fossil.  The following is a summary of the extended commands:

  *  anoncap
  *  anycap
  *  artifact
  *  checkout
  *  combobox
  *  date
  *  decorate
  *  dir
  *  enable_output
  *  encode64
  *  getParameter
  *  glob_match
  *  globalState
  *  hascap
  *  hasfeature
  *  html
  *  htmlize
  *  http
  *  httpize
  *  linecount
  *  markdown
  *  puts
  *  query
  *  randhex
  *  regexp
  *  reinitialize
  *  render
  *  repository
230
231
232
233
234
235
236












237
238
239
240
241
242
243
244







245
246
247
248
249
250
251
-------------------------------------------

  *  decorate STRING

Renders STRING as wiki content; however, only links are handled.  No
other markup is processed.













<a name="enable_output"></a>TH1 enable_output Command
-----------------------------------------------------

  *  enable_output BOOLEAN

Enable or disable sending output when the combobox, puts, or wiki
commands are used.








<a name="getParameter"></a>TH1 getParameter Command
---------------------------------------------------

  *  getParameter NAME ?DEFAULT?

Returns the value of the specified query parameter or the specified
default value when there is no matching query parameter.







>
>
>
>
>
>
>
>
>
>
>
>








>
>
>
>
>
>
>







234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
-------------------------------------------

  *  decorate STRING

Renders STRING as wiki content; however, only links are handled.  No
other markup is processed.
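
As a small illustration, the following sketch (the artifact ID is made up)
runs decorate over a string containing a Fossil-style hyperlink; everything
outside the bracketed link is passed through as plain text:

    <th1>
      decorate {See check-in [a1b2c3d4] for the original fix.}
    </th1>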

<a name="dir"></a>TH1 dir Command
-------------------------------------------

  * dir CHECKIN ?GLOB? ?DETAILS?

Returns a list containing all files in CHECKIN. If GLOB is given, only
the files matching the pattern GLOB within CHECKIN will be returned.
If DETAILS is non-zero, the result will be a list-of-lists, with each
element containing at least three elements: the file name, the file
size (in bytes), and the file last modification time (relative to the
time zone configured for the repository).
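
For example, a report or page script might count the C sources in a
particular check-in.  A minimal sketch, with an illustrative check-in
name and glob pattern:

    <th1>
      set cfiles [dir trunk src/*.c]
      puts "[llength $cfiles] C source files in trunk\n"
    </th1>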

<a name="enable_output"></a>TH1 enable_output Command
-----------------------------------------------------

  *  enable_output BOOLEAN

Enable or disable sending output when the combobox, puts, or wiki
commands are used.
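
A small sketch showing how a script can temporarily suppress generated
output and then turn it back on:

    <th1>
      enable_output 0
      puts "this text never reaches the page"
      enable_output 1
      puts "this text is emitted normally\n"
    </th1>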

<a name="encode64"></a>TH1 encode64 Command
-------------------------------------------

  *  encode64 STRING

Encode the specified string using Base64 and return the result.
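
For instance, a script could build an HTTP basic-authentication token
(the credentials here are purely illustrative):

    <th1>
      set tok [encode64 "anonymous:password"]
      puts "Authorization: Basic $tok\n"
    </th1>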

<a name="getParameter"></a>TH1 getParameter Command
---------------------------------------------------

  *  getParameter NAME ?DEFAULT?

Returns the value of the specified query parameter or the specified
default value when there is no matching query parameter.
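
A typical use is reading an optional query parameter with a fallback; the
parameter name and default below are illustrative:

    <th1>
      set n [getParameter n 25]
      puts "showing $n rows\n"
    </th1>
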
294
295
296
297
298
299
300

301
302
303
304
305
306
307
308
309
310



311
312
313
314
315
316
317
  *  hasfeature STRING

Returns true if the binary has the given compile-time feature enabled.
The possible features are:

  1. **ssl** -- _Support for the HTTPS transport._
  1. **legacyMvRm** -- _Support for legacy mv/rm command behavior._

  1. **th1Docs** -- _Support for TH1 in embedded documentation._
  1. **th1Hooks** -- _Support for TH1 command and web page hooks._
  1. **tcl** -- _Support for Tcl integration._
  1. **useTclStubs** -- _Tcl stubs enabled in the Tcl headers._
  1. **tclStubs** -- _Uses Tcl stubs (i.e. linking with stubs library)._
  1. **tclPrivateStubs** -- _Uses Tcl private stubs (i.e. header-only)._
  1. **json** -- _Support for the JSON APIs._
  1. **markdown** -- _Support for Markdown documentation format._
  1. **unicodeCmdLine** -- _The command line arguments are Unicode._
  1. **dynamicBuild** -- _Dynamically linked to libraries._




<a name="html"></a>TH1 html Command
-----------------------------------

  *  html STRING

Outputs the STRING escaped for HTML.







>










>
>
>







317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
  *  hasfeature STRING

Returns true if the binary has the given compile-time feature enabled.
The possible features are:

  1. **ssl** -- _Support for the HTTPS transport._
  1. **legacyMvRm** -- _Support for legacy mv/rm command behavior._
  1. **execRelPaths** -- _Use relative paths with external diff/gdiff._
  1. **th1Docs** -- _Support for TH1 in embedded documentation._
  1. **th1Hooks** -- _Support for TH1 command and web page hooks._
  1. **tcl** -- _Support for Tcl integration._
  1. **useTclStubs** -- _Tcl stubs enabled in the Tcl headers._
  1. **tclStubs** -- _Uses Tcl stubs (i.e. linking with stubs library)._
  1. **tclPrivateStubs** -- _Uses Tcl private stubs (i.e. header-only)._
  1. **json** -- _Support for the JSON APIs._
  1. **markdown** -- _Support for Markdown documentation format._
  1. **unicodeCmdLine** -- _The command line arguments are Unicode._
  1. **dynamicBuild** -- _Dynamically linked to libraries._

Specifying an unknown feature will return a value of false; it will not
raise a script error.
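
For example, a header script can adapt its output to the build:

    <th1>
      if {[hasfeature ssl]} {
        puts "HTTPS support is compiled in\n"
      } else {
        puts "this build has no SSL support\n"
      }
    </th1>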

<a name="html"></a>TH1 html Command
-----------------------------------

  *  html STRING

Outputs the STRING escaped for HTML.
348
349
350
351
352
353
354









355
356
357
358
359
360
361
<a name="linecount"></a>TH1 linecount Command
---------------------------------------------

  *  linecount STRING MAX MIN

Returns one more than the number of \n characters in STRING.  But
never returns less than MIN or more than MAX.










<a name="puts"></a>TH1 puts Command
-----------------------------------

  *  puts STRING

Outputs the STRING unchanged.







>
>
>
>
>
>
>
>
>







375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
<a name="linecount"></a>TH1 linecount Command
---------------------------------------------

  *  linecount STRING MAX MIN

Returns one more than the number of \n characters in STRING.  But
never returns less than MIN or more than MAX.
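
A short sketch: the string below contains two \n characters, so linecount
returns 3, which already lies inside the MIN/MAX bounds of 2 and 10:

    <th1>
      set txt "one\ntwo\nthree"
      puts [linecount $txt 10 2]
    </th1>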

<a name="markdown"></a>TH1 markdown Command
---------------------------------------------

  *  markdown STRING

Renders the input string as markdown.  The result is a two-element list.
The first element contains the body, rendered as HTML.  The second element
is the text-only title string.
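
A minimal sketch that captures the rendered body (list element order as
described above); the input text is illustrative:

    <th1>
      set parts [markdown "Some *emphasized* text."]
      set body [lindex $parts 0]
    </th1>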

<a name="puts"></a>TH1 puts Command
-----------------------------------

  *  puts STRING

Outputs the STRING unchanged.
Changes to www/uitest.html.
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
<html>
<head>
<title>Fossil UI Test</title>
</head>
<body>
<script>
  var aTest = [
///////////////////////////////////////////////////////////////////////////
///  Add pages to be tested below:
//////////////////////////////////////////////////////////////////////////
{
 url: "timeline",
 desc: 
   "Simple timeline of most recent check-ins. Verify that all submenus work."
},
{
 url: "timeline?n=125",
 desc: 
   "Timeline with 125 entries.  Verify that submenus preserve the entry count."
},
{
 url: "wiki",
 desc: 
   "The wiki homepage"
}
//////////////////////////////////////////////////////////////////////////////
///  End of testing data
/////////////////////////////////////////////////////////////////////////////
  ];
  var iTest = 0;












|




|




|







1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
<html>
<head>
<title>Fossil UI Test</title>
</head>
<body>
<script>
  var aTest = [
///////////////////////////////////////////////////////////////////////////
///  Add pages to be tested below:
//////////////////////////////////////////////////////////////////////////
{
 url: "timeline",
 desc:
   "Simple timeline of most recent check-ins. Verify that all submenus work."
},
{
 url: "timeline?n=125",
 desc:
   "Timeline with 125 entries.  Verify that submenus preserve the entry count."
},
{
 url: "wiki",
 desc:
   "The wiki homepage"
}
//////////////////////////////////////////////////////////////////////////////
///  End of testing data
/////////////////////////////////////////////////////////////////////////////
  ];
  var iTest = 0;
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
  xprev.hidden = 1;
  xnext.hidden = 1;
  xpass.hidden = 1;
  xstart.hidden = 0;
  xstart.href = baseURI + aTest[0].url;
  function startTest(){
    setTimeout(loadPage,1);
  } 
  function prevTest(){
    if( iTest<=0 ) return false;
    iTest--;
    setTimeout(loadPage,1);
  }
  function nextTest(){
    if( iTest+1>=nTest ) return false;







|







96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
  xprev.hidden = 1;
  xnext.hidden = 1;
  xpass.hidden = 1;
  xstart.hidden = 0;
  xstart.href = baseURI + aTest[0].url;
  function startTest(){
    setTimeout(loadPage,1);
  }
  function prevTest(){
    if( iTest<=0 ) return false;
    iTest--;
    setTimeout(loadPage,1);
  }
  function nextTest(){
    if( iTest+1>=nTest ) return false;
Changes to www/webpage-ex.md.
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68

  *  <a target='_blank' class='exbtn'
     href='../../../timeline?n=100&y=ci&ubg'>Example</a>
     100 most recent check-ins color coded by committer.

  *  <a target='_blank' class='exbtn'
     href='../../../timeline?from=version-1.27&to=version-1.28'>Example</a>
     All check-ins on the most direct path from 
     version-1.27 to version-1.28

     (Hint:  In any graph above, click the square node boxes 
     for two check-ins or files to see a diff.)

  *  <a target='_blank' class='exbtn'
     href='../../../tree?ci=daff9d20621&type=tree'>Example</a>
     All files for a particular check-in (daff9d20621480)

  *  <a target='_blank' class='exbtn'







|


|







51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68

  *  <a target='_blank' class='exbtn'
     href='../../../timeline?n=100&y=ci&ubg'>Example</a>
     100 most recent check-ins color coded by committer.

  *  <a target='_blank' class='exbtn'
     href='../../../timeline?from=version-1.27&to=version-1.28'>Example</a>
     All check-ins on the most direct path from
     version-1.27 to version-1.28

     (Hint:  In any graph above, click the square node boxes
     for two check-ins or files to see a diff.)

  *  <a target='_blank' class='exbtn'
     href='../../../tree?ci=daff9d20621&type=tree'>Example</a>
     All files for a particular check-in (daff9d20621480)

  *  <a target='_blank' class='exbtn'
86
87
88
89
90
91
92
93
94
95
96
97
98
     href='../../../reports?view=byfile'>Example</a>
     Number of check-ins for each source file.
     (Click on column headers to sort.)

  *  <a target='_blank' class='exbtn'
     href='../../../blame?checkin=5260fbf63287&filename=src/rss.c&limit=-1'>
       Example</a>
     Most recent change to each line of a particular source file in a 
     particular check-in.

  *  <a target='_blank' class='exbtn'
     href='../../../taglist'>Example</a>
     List of tags on check-ins.







|





86
87
88
89
90
91
92
93
94
95
96
97
98
     href='../../../reports?view=byfile'>Example</a>
     Number of check-ins for each source file.
     (Click on column headers to sort.)

  *  <a target='_blank' class='exbtn'
     href='../../../blame?checkin=5260fbf63287&filename=src/rss.c&limit=-1'>
       Example</a>
     Most recent change to each line of a particular source file in a
     particular check-in.

  *  <a target='_blank' class='exbtn'
     href='../../../taglist'>Example</a>
     List of tags on check-ins.
Changes to www/webui.wiki.
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
from within an open check-out, you can omit the repository name:

  <b>fossil ui</b>

The latter case is a very useful short-cut when you are working on a
Fossil project and you want to quickly do some work with the web interface.
Notice that Fossil automatically finds an unused TCP port to run the
server own and automatically points your web browser to the correct
URL.  So there is never any fumbling around trying to find an open
port or to type arcane strings into your browser URL entry box.
The interface just pops right up, ready to run.

The Fossil web interface is also very easy to setup and run on a
network server, as either a CGI program or from inetd, or as an
SCGI server.  Details on how







|







59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
from within an open check-out, you can omit the repository name:

  <b>fossil ui</b>

The latter case is a very useful short-cut when you are working on a
Fossil project and you want to quickly do some work with the web interface.
Notice that Fossil automatically finds an unused TCP port to run the
server on and automatically points your web browser to the correct
URL.  So there is never any fumbling around trying to find an open
port or to type arcane strings into your browser URL entry box.
The interface just pops right up, ready to run.

The Fossil web interface is also very easy to setup and run on a
network server, as either a CGI program or from inetd, or as an
SCGI server.  Details on how
Added www/xkcd-git.gif.

cannot compute difference between binary files