Overview

Comment:      Merge trunk
Downloads:    Tarball | ZIP archive
Timelines:    family | ancestors | descendants | both | andygoth-crlf
Files:        files | file ages | folders
SHA1:         7ea74acf55497aafb096ddf45b966e92
User & Date:  andygoth 2016-11-07 00:50:10.445
Context

2016-11-07
00:53  Update for a new instance of crnl-glob that was added since this branch's baseline ... (Closed-Leaf check-in: 46fd89ea user: andygoth tags: andygoth-crlf)
00:50  Merge trunk ... (check-in: 7ea74acf user: andygoth tags: andygoth-crlf)
00:48  Ensure deleted/missing files are not processed as other types of files when C_DELETED and C_MISSING are not specified ... (check-in: e9a43ae0 user: andygoth tags: trunk)

2016-05-23
15:34  Rename crnl-glob to crlf-glob, retaining support for crnl-glob as a compatibility alias. Change terminology from NL to LF throughout, excepting cases where NL means newline and not line feed. Also don't change linenoise.c which is third-party code. ... (check-in: 2bc3cfeb user: andygoth tags: andygoth-crlf)
Changes
Changes to .fossil-settings/ignore-glob.
   compat/openssl*
   compat/tcl*
   fossil
   fossil.exe
   win/fossil.exe
+  *shell-see.*
   *sqlite3-see.*
Changes to Dockerfile.
Old (lines 1-3):

   ###
   # Dockerfile for Fossil
   ###

New (lines 1-11):

   ###
   # Dockerfile for Fossil
   ###
   FROM fedora:24
   ### Now install some additional parts we will need for the build
   RUN dnf update -y && dnf install -y gcc make zlib-devel openssl-devel tar && dnf clean all && groupadd -r fossil -g 433 && useradd -u 431 -r -g fossil -d /opt/fossil -s /sbin/nologin -c "Fossil user" fossil
   ### If you want to build "trunk", change the next line accordingly.
   ENV FOSSIL_INSTALL_VERSION release
︙ | ︙ |
Changes to Makefile.in.
︙ | ︙ | |||
   #### Tcl shell for use in running the fossil testsuite. If you do not
   # care about testing the end result, this can be blank.
   #
   TCLSH = tclsh
   LIB = @LDFLAGS@ @EXTRA_LDFLAGS@ @LIBS@
+  BCCFLAGS = @CPPFLAGS@ @CFLAGS@
   TCCFLAGS = @EXTRA_CFLAGS@ @CPPFLAGS@ @CFLAGS@ -DHAVE_AUTOCONFIG_H -D_HAVE_SQLITE_CONFIG_H
   INSTALLDIR = $(DESTDIR)@prefix@/bin
   USE_SYSTEM_SQLITE = @USE_SYSTEM_SQLITE@
   USE_LINENOISE = @USE_LINENOISE@
   USE_SEE = @USE_SEE@
   FOSSIL_ENABLE_MINIZ = @FOSSIL_ENABLE_MINIZ@
︙ | ︙ |
Changes to VERSION.
New line 1:

   1.37
Changes to ajax/js/fossil-ajaj.js.
1 2 3 4 5 6 7 8 9 10 11 | /** This file contains a WhAjaj extension for use with Fossil/JSON. Author: Stephan Beal (sgbeal@googlemail.com) License: Public Domain */ /** Constructor for a new Fossil AJAJ client. ajajOpt may be an optional object suitable for passing to the WhAjaj.Connector() constructor. | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 | /** This file contains a WhAjaj extension for use with Fossil/JSON. Author: Stephan Beal (sgbeal@googlemail.com) License: Public Domain */ /** Constructor for a new Fossil AJAJ client. ajajOpt may be an optional object suitable for passing to the WhAjaj.Connector() constructor. On returning, this.ajaj is-a WhAjaj.Connector instance which can be used to send requests to the back-end (though the convenience functions of this class are the preferred way to do it). Clients are encouraged to use FossilAjaj.sendCommand() (and friends) instead of the underlying WhAjaj.Connector API, since this class' API contains Fossil-specific request-calling handling (e.g. of authentication info) whereas WhAjaj is more generic. */ function FossilAjaj(ajajOpt) { this.ajaj = new WhAjaj.Connector(ajajOpt); return this; |
︙ | ︙ | |||
36 37 38 39 40 41 42 | }; /** Sends a command to the fossil back-end. Command should be the path part of the URL, e.g. /json/stat, payload is a request-specific value type (may often be null/undefined). ajajOpt is an optional object holding WhAjaj.sendRequest()-compatible options. | | | 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 | }; /** Sends a command to the fossil back-end. Command should be the path part of the URL, e.g. /json/stat, payload is a request-specific value type (may often be null/undefined). ajajOpt is an optional object holding WhAjaj.sendRequest()-compatible options. This function constructs a Fossil/JSON request envelope based on the given arguments and adds this.auth.authToken and a requestId to it. */ FossilAjaj.prototype.sendCommand = function(command, payload, ajajOpt) { var req; ajajOpt = ajajOpt || {}; |
︙ | ︙ | |||
61 62 63 64 65 66 67 | if(command) ajajOpt.url = this.ajaj.derivedOption('url',ajajOpt) + command; this.ajaj.sendRequest(req,ajajOpt); }; /** Sends a login request to the back-end. | | | | | 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 | if(command) ajajOpt.url = this.ajaj.derivedOption('url',ajajOpt) + command; this.ajaj.sendRequest(req,ajajOpt); }; /** Sends a login request to the back-end. ajajOpt is an optional configuration object suitable for passing to sendCommand(). After the response returns, this.auth will be set to the response payload. If name === 'anonymous' (the default if none is passed in) then this function ignores the pw argument and must make two requests - the first one gets the captcha code and the second one submits it. ajajOpt.onResponse() (if set) is only called for the actual login response (the 2nd one), as opposed to being called for both requests. However, this.ajaj.callbacks.onResponse() _is_ called for both (because it happens at a lower level). If this object has an onLogin() function it is called (with no arguments) before the onResponse() handler of the login is called (that is the 2nd request for anonymous logins) and any exceptions it throws are ignored. */ FossilAjaj.prototype.login = function(name,pw,ajajOpt) { |
︙ | ︙ | |||
133 134 135 136 137 138 139 | } else doLogin(); }; /** Logs out of fossil, invaliding this login token. | | | | 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 | } else doLogin(); }; /** Logs out of fossil, invaliding this login token. ajajOpt is an optional configuration object suitable for passing to sendCommand(). If this object has an onLogout() function it is called (with no arguments) before the onResponse() handler is called. IFF the response succeeds then this.auth is unset. */ FossilAjaj.prototype.logout = function(ajajOpt) { var self = this; ajajOpt = this.ajaj.normalizeAjaxParameters( ajajOpt || {} ); |
︙ | ︙ | |||
161 162 163 164 165 166 167 | }; this.sendCommand('/json/logout', undefined, ajajOpt ); }; /** Sends a HAI request to the server. /json/HAI is an alias /json/version. | | | 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 | }; this.sendCommand('/json/logout', undefined, ajajOpt ); }; /** Sends a HAI request to the server. /json/HAI is an alias /json/version. ajajOpt is an optional configuration object suitable for passing to sendCommand(). */ FossilAjaj.prototype.HAI = function(ajajOpt) { this.sendCommand('/json/HAI', undefined, ajajOpt); }; |
︙ | ︙ | |||
224 225 226 227 228 229 230 | mode, feeds it the request envelope, and returns the response envelope via the same mechanisms defined for the HTTP-based implementations. The interface is otherwise compatible with the "normal" FossilAjaj.sendCommand() front-end (it is, however, fossil-specific, and not back-end agnostic like the WhAjaj.sendImpl() interface intends). | | | | | 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 | mode, feeds it the request envelope, and returns the response envelope via the same mechanisms defined for the HTTP-based implementations. The interface is otherwise compatible with the "normal" FossilAjaj.sendCommand() front-end (it is, however, fossil-specific, and not back-end agnostic like the WhAjaj.sendImpl() interface intends). */ FossilAjaj.rhinoLocalBinarySendImpl = function(request,args){ var self = this; request = request || {}; if(!args.fossilBinary){ throw new Error("fossilBinary is not set on AJAX options!"); } var url = args.url.split('?')[0].split(/\/+/); if(url.length>1){ // 3x shift(): protocol, host, 'json' part of path request.command = (url.shift(),url.shift(),url.shift(), url.join('/')); } delete args.url; //print("rhinoLocalBinarySendImpl SENDING: "+WhAjaj.stringify(request)); var json; try{ var pargs = [args.fossilBinary, 'json', '--json-input', '-']; var p = java.lang.Runtime.getRuntime().exec(pargs); var outs = p.getOutputStream(); var osr = new java.io.OutputStreamWriter(outs); var osb = new java.io.BufferedWriter(osr); json = JSON.stringify(request); osb.write(json,0, json.length); osb.close(); var ins = p.getInputStream(); var isr = new java.io.InputStreamReader(ins); var br = new java.io.BufferedReader(isr); var line; |
︙ | ︙ |
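The comments in the hunks above document the public FossilAjaj client API: construction, sendCommand(), login(), logout(), and HAI(). As a quick orientation, here is a minimal usage sketch built only from those documented signatures; the base URL, the '/json/stat' command, and the callback bodies are illustrative assumptions, not part of this check-in.

    // Hedged sketch of the FossilAjaj API documented above.
    // The URL below is an assumption; adjust it for a real repository.
    var fossil = new FossilAjaj({
        url: '/cgi-bin/fossil/json',   // assumed back-end URL
        timeout: 10000,
        onError: function(req, opt) {
            // connection-level failures (timeout, unreachable host, non-JSON reply)
            alert('request failed: ' + opt.errorMessage);
        }
    });

    // Anonymous login: per the comments above, the pw argument is ignored and
    // two requests are made (captcha fetch, then the login itself).
    fossil.login('anonymous', null, {
        onResponse: function(resp, req) {
            // this.auth now holds the login payload; later commands carry the authToken.
            fossil.sendCommand('/json/stat', undefined, {
                onResponse: function(resp, req) {
                    alert(WhAjaj.stringify(resp));
                }
            });
        }
    });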
Changes to ajax/js/whajaj.js.
︙ | ︙ | |||
8 9 10 11 12 13 14 | All functionality is part of a class named WhAjaj, and that class acts as namespace for this framework. Author: Stephan Beal (http://wanderinghorse.net/home/stephan/) License: Public Domain | | | | | | | | | | 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 | All functionality is part of a class named WhAjaj, and that class acts as namespace for this framework. Author: Stephan Beal (http://wanderinghorse.net/home/stephan/) License: Public Domain This framework is directly derived from code originally found in http://code.google.com/p/jsonmessage, and later in http://whiki.wanderinghorse.net, where it contained quite a bit of application-specific logic. It was eventually (the 3rd time i needed it) split off into its own library to simplify inclusion into my many mini-projects. */ /** The WhAjaj function is primarily a namespace, and not intended to called or instantiated via the 'new' operator. */ function WhAjaj() { } /** Returns a millisecond Unix Epoch timestamp. */ WhAjaj.msTimestamp = function() { return (new Date()).getTime(); }; /** Returns a Unix Epoch timestamp (in seconds) in integer format. Reminder to self: (1.1 %1.2) evaluates to a floating-point value in JS, and thus this implementation is less than optimal. */ WhAjaj.unixTimestamp = function() { var ts = (new Date()).getTime(); return parseInt( ""+((ts / 1000) % ts) ); }; |
︙ | ︙ | |||
86 87 88 89 90 91 92 | ) ; }; /** Parses window.location.search-style string into an object containing key/value pairs of URL arguments (already urldecoded). | | | | | | | 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 | ) ; }; /** Parses window.location.search-style string into an object containing key/value pairs of URL arguments (already urldecoded). If the str argument is not passed (arguments.length==0) then window.location.search.substring(1) is used by default. If neither str is passed in nor window exists then false is returned. On success it returns an Object containing the key/value pairs parsed from the string. Keys which have no value are treated has having the boolean true value. FIXME: for keys in the form "name[]", build an array of results, like PHP does. */ WhAjaj.processUrlArgs = function(str) { if( 0 === arguments.length ) { if( ('undefined' === typeof window) || !window.location || !window.location.search ) return false; else str = (''+window.location.search).substring(1); |
︙ | ︙ | |||
125 126 127 128 129 130 131 | }; /** A simple wrapper around JSON.stringify(), using my own personal preferred values for the 2nd and 3rd parameters. To globally set its indentation level, assign WhAjaj.stringify.indent to an integer value (0 for no intendation). | | | | | | | | | | | | | | | | | | | | 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 | }; /** A simple wrapper around JSON.stringify(), using my own personal preferred values for the 2nd and 3rd parameters. To globally set its indentation level, assign WhAjaj.stringify.indent to an integer value (0 for no intendation). This function is intended only for human-readable output, not generic over-the-wire JSON output (where JSON.stringify(val) will produce smaller results). */ WhAjaj.stringify = function(val) { if( ! arguments.callee.indent ) arguments.callee.indent = 4; return JSON.stringify(val,0,arguments.callee.indent); }; /** Each instance of this class holds state information for making AJAJ requests to a back-end system. While clients may use one "requester" object per connection attempt, for connections to the same back-end, using an instance configured for that back-end can simplify usage. This class is designed so that the actual connection-related details (i.e. _how_ it connects to the back-end) may be re-implemented to use a client's preferred connection mechanism (e.g. jQuery). The optional opt paramater may be an object with any (or all) of the properties documented for WhAjaj.Connector.options.ajax. Properties set here (or later via modification of the "options" property of this object) will be used in calls to WhAjaj.Connector.sendRequest(), and these override (normally) any options set in WhAjaj.Connector.options.ajax. Note that WhAjaj.Connector.sendRequest() _also_ takes an options object, and ones passed there will override, for purposes of that one request, any options passed in here or defined in WhAjaj.Connector.options.ajax. See WhAjaj.Connector.options.ajax and WhAjaj.Connector.prototype.sendRequest() for more details about the precedence of options. Sample usage: @code // Set up common connection-level options: var cgi = new WhAjaj.Connector({ url: '/cgi-bin/my.cgi', timeout:10000, onResponse(resp,req) { alert(JSON.stringify(resp,0.4)); }, onError(req,opt) { |
︙ | ︙ | |||
183 184 185 186 187 188 189 | onResponse(resp,req){ alert(WhAjaj.stringify(resp)); } }); @endcode For common request types, clients can add functions to this object which act as wrappers for backend-specific functionality. As a simple example: | | | | | | 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 | onResponse(resp,req){ alert(WhAjaj.stringify(resp)); } }); @endcode For common request types, clients can add functions to this object which act as wrappers for backend-specific functionality. As a simple example: @code cgi.login = function(name,pw,ajajOpt) { this.sendRequest( {command:"json/login", name:name, password:pw }, ajajOpt ); }; @endcode TODOs: - Caching of page-load requests, with a configurable lifetime. - Use-cases like the above login() function are a tiny bit problematic to implement when each request has a different URL path (i know this from the whiki and fossil implementations). This is partly a side-effect of design descisions made back in the very first days of this code's life. i need to go through and see where i can bend those conventions a bit (where it won't break my other apps unduly). |
︙ | ︙ | |||
228 229 230 231 232 233 234 | WhAjaj.Connector.options = { /** A (meaningless) prefix to apply to WhAjaj.Connector-generated request IDs. */ requestIdPrefix:'WhAjaj.Connector-', /** | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 | WhAjaj.Connector.options = { /** A (meaningless) prefix to apply to WhAjaj.Connector-generated request IDs. */ requestIdPrefix:'WhAjaj.Connector-', /** Default options for WhAjaj.Connector.sendRequest() connection parameters. This object holds only connection-related options and callbacks (all optional), and not options related to the required JSON structure of any given request. i.e. the page name used in a get-page request are not set here but are specified as part of the request object. These connection options are a "normalized form" of options often found in various AJAX libraries like jQuery, Prototype, dojo, etc. This approach allows us to swap out the real connection-related parts by writing a simple proxy which transforms our "normalized" form to the backend-specific form. For examples, see the various implementations stored in WhAjaj.Connector.sendImpls. The following callback options are, in practice, almost always set globally to some app-wide defaults: - onError() to report errors using a common mechanism. - beforeSend() to start a visual activity notification - afterSend() to disable the visual activity notification However, be aware that if any given WhAjaj.Connector instance is given its own before/afterSend callback then those will override these. Mixing shared/global and per-instance callbacks can potentially lead to confusing results if, e.g., the beforeSend() and afterSend() functions have side-effects but are not used with their proper before/after partner. TODO: rename this to 'ajaj' (the name is historical). The problem with renaming it is is that the word 'ajax' is pretty prevelant in the source tree, so i can't globally swap it out. */ ajax: { /** URL of the back-end server/CGI. */ url: '/some/path', /** Connection method. Some connection-related functions might override any client-defined setting. Must be one of 'GET' or 'POST'. For custom connection implementation, it may optionally be some implementation-specified value. Normally the API can derive this value automatically - if the request uses JSON data it is POSTed, else it is GETted. */ method:'GET', /** A hint whether to run the operation asynchronously or not. Not all concrete WhAjaj.Connector.sendImpl() implementations can support this. Interestingly, at least one popular AJAX toolkit does not document supporting _synchronous_ AJAX operations. 
All common browser-side implementations support async operation, but non-browser implementations might not. */ asynchronous:true, /** A HTTP authentication login name for the AJAX connection. Not all concrete WhAjaj.Connector.sendImpl() implementations can support this. */ loginName:undefined, /** An HTTP authentication login password for the AJAJ connection. Not all concrete WhAjaj.Connector.sendImpl() implementations can support this. */ loginPassword:undefined, /** A connection timeout, in milliseconds, for establishing an AJAJ connection. Not all concrete WhAjaj.Connector.sendImpl() implementations can support this. */ timeout:10000, /** If an AJAJ request receives JSON data from the back-end, that data is passed as a plain Object as the response parameter (exception: in jsonp mode it is passed a string (why???)). The initiating request object is passed as the second parameter, but clients can normally ignore it (only those which need a way to map specific requests to responses will need it). The 3rd parameter is the same as the 'this' object for the context of the callback, but is provided because the instance-level callbacks (set in (WhAjaj.Connector instance).callbacks, require it in some cases (because their 'this' is different!). Note that the response might contain error information which comes from the back-end. The difference between this error info and the info passed to the onError() callback is that this data indicates an application-level error, whereas onError() is used to report connection-level problems or when the backend produces non-JSON data (which, when not in jsonp mode, is unexpected and is as fatal to the request as a connection error). */ onResponse: function(response, request, opt){}, /** If an AJAX request fails to establish a connection or it receives non-JSON data from the back-end, this function is called (e.g. timeout error or host name not resolvable). It is passed the originating request and the "normalized" connection parameters used for that request. The connectOpt object "should" (or "might") have an "errorMessage" property which describes the nature of the problem. Clients will almost always want to replace the default implementation with something which integrates into their application. */ onError: function(request, connectOpt) { alert('AJAJ request failed:\n' +'Connection information:\n' +JSON.stringify(connectOpt,0,4) ); }, /** Called before each connection attempt is made. Clients can use this to, e.g., enable a visual "network activity notification" for the user. It is passed the original request object and the normalized connection parameters for the request. If this function changes opt, those changes _are_ applied to the subsequent request. If this function throws, neither the onError() nor afterSend() callbacks are triggered and WhAjaj.Connector.sendImpl() propagates the exception back to the caller. */ beforeSend: function(request,opt){}, /** Called after an AJAJ connection attempt completes, regardless of success or failure. Passed the same parameters as beforeSend() (see that function for details). Here's an example of setting up a visual notification on ajax operations using jQuery (but it's also easy to do without jQuery as well): @code function startAjaxNotif(req,opt) { var me = arguments.callee; var c = ++me.ajaxCount; me.element.text( c + " pending AJAX operation(s)..." ); if( 1 == c ) me.element.stop().fadeIn(); } startAjaxNotif.ajaxCount = 0. 
startAjaxNotif.element = jQuery('#whikiAjaxNotification'); function endAjaxNotif() { var c = --startAjaxNotif.ajaxCount; startAjaxNotif.element.text( c+" pending AJAX operation(s)..." ); if( 0 == c ) startAjaxNotif.element.stop().fadeOut(); } @endcode Set the beforeSend/afterSend properties to those functions to enable the notifications by default. */ afterSend: function(request,opt){}, /** If jsonp is a string then the WhAjaj-internal response handling code ASSUMES that the response contains a JSONP-style construct and eval()s it after afterSend() but before onResponse(). In this case, onResponse() will get a string value for the response instead of a response object parsed from JSON. */ jsonp:undefined, /** Don't use yet. Planned future option. */ propagateExceptions:false } |
︙ | ︙ | |||
466 467 468 469 470 471 472 | else v = this.options[key]; if( undefined !== v ) return v; else v = WhAjaj.Connector.options.ajax[key]; return v; }; /** | | | | | | | | | | | | 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 | else v = this.options[key]; if( undefined !== v ) return v; else v = WhAjaj.Connector.options.ajax[key]; return v; }; /** Returns a unique string on each call containing a generic reandom request identifier string. This is not used by the core API but can be used by client code to generate unique IDs for each request (if needed). The exact format is unspecified and may change in the future. Request IDs can be used by clients to "match up" responses to specific requests if needed. In practice, however, they are seldom, if ever, needed. When passing several concurrent requests through the same response callback, it might be useful for some clients to be able to distinguish, possibly re-routing them through other handlers based on the originating request type. If this.options.requestIdPrefix or WhAjaj.Connector.options.requestIdPrefix is set then that text is prefixed to the returned string. */ WhAjaj.Connector.prototype.generateRequestId = function() { if( undefined === arguments.callee.sequence ) { |
︙ | ︙ | |||
510 511 512 513 514 515 516 | if( ! opt.hasOwnProperty(k) ) continue /* proactive Prototype kludge! */; this.options[k] = opt[k]; } return this.options; }; /** | | | | | | | | | | | | | | | | | | | | | | 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 | if( ! opt.hasOwnProperty(k) ) continue /* proactive Prototype kludge! */; this.options[k] = opt[k]; } return this.options; }; /** An internal helper object which holds several functions intended to simplify the creation of concrete communication channel implementations for WhAjaj.Connector.sendImpl(). These operations take care of some of the more error-prone parts of ensuring that onResponse(), onError(), etc. callbacks are called consistently using the same rules. */ WhAjaj.Connector.sendHelper = { /** opt is assumed to be a normalized set of WhAjaj.Connector.sendRequest() options. This function creates a url by concatenating opt.url and some form of opt.urlParam. If opt.urlParam is an object or string then it is appended to the url. An object is assumed to be a one-dimensional set of simple (urlencodable) key/value pairs, and not larger data structures. A string value is assumed to be a well-formed, urlencoded set of key/value pairs separated by '&' characters. The new/normalized URL is returned (opt is not modified). If opt.urlParam is not set then opt.url is returned (or an empty string if opt.url is itself a false value). TODO: if opt is-a Object and any key points to an array, build up a list of keys in the form "keyname[]". We could arguably encode sub-objects like "keyname[subkey]=...", but i don't know if that's conventions-compatible with other frameworks. */ normalizeURL: function(opt) { var u = opt.url || ''; if( opt.urlParam ) { var addQ = (u.indexOf('?') >= 0) ? false : true; var addA = addQ ? false : ((u.indexOf('&')>=0) ? true : false); |
︙ | ︙ | |||
562 563 564 565 566 567 568 | tail = opt.urlParam; } u = u + (addQ ? '?' : '') + (addA ? '&' : '') + tail; } return u; }, /** | | | | | | | | | | | | | | | | | | | | | | | 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 | tail = opt.urlParam; } u = u + (addQ ? '?' : '') + (addA ? '&' : '') + tail; } return u; }, /** Should be called by WhAjaj.Connector.sendImpl() implementations after a response has come back. This function takes care of most of ensuring that framework-level conventions involving WhAjaj.Connector.options.ajax properties are followed. The request argument must be the original request passed to the sendImpl() function. It may legally be null for GET requests. The opt object should be the normalized AJAX options used for the connection. The resp argument may be either a plain Object or a string (in which case it is assumed to be JSON). The 'this' object for this call MUST be a WhAjaj.Connector instance in order for callback processing to work properly. This function takes care of the following: - Calling opt.afterSend() - If resp is a string, de-JSON-izing it to an object. - Calling opt.onResponse() - Calling opt.onError() in several common (potential) error cases. - If resp is-a String and opt.jsonp then resp is assumed to be a JSONP-form construct and is eval()d BEFORE opt.onResponse() is called. It is arguable to eval() it first, but the logic integrates better with the non-jsonp handler. The sendImpl() should return immediately after calling this. The sendImpl() must call only one of onSendSuccess() or onSendError(). It must call one of them or it must implement its own response/error handling, which is not recommended because getting the documented semantics of the onError/onResponse/afterSend handling correct can be tedious. */ onSendSuccess:function(request,resp,opt) { var cb = this.callbacks || {}; if( WhAjaj.isFunction(cb.afterSend) ) { try {cb.afterSend( request, opt );} catch(e){} |
︙ | ︙ | |||
664 665 666 667 668 669 670 | }, /** Should be called by sendImpl() implementations after a response has failed to connect (e.g. could not resolve host or timeout reached). This function takes care of most of ensuring that framework-level conventions involving WhAjaj.Connector.options.ajax properties are followed. | | | | | | | | | | 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 | }, /** Should be called by sendImpl() implementations after a response has failed to connect (e.g. could not resolve host or timeout reached). This function takes care of most of ensuring that framework-level conventions involving WhAjaj.Connector.options.ajax properties are followed. The request argument must be the original request passed to the sendImpl() function. It may legally be null for GET requests. The 'this' object for this call MUST be a WhAjaj.Connector instance in order for callback processing to work properly. The opt object should be the normalized AJAX options used for the connection. By convention, the caller of this function "should" set opt.errorMessage to contain a human-readable description of the error. The sendImpl() should return immediately after calling this. The return value from this function is unspecified. */ onSendError: function(request,opt) { var cb = this.callbacks || {}; if( WhAjaj.isFunction(cb.afterSend) ) { try {cb.afterSend( request, opt );} |
︙ | ︙ | |||
702 703 704 705 706 707 708 | try {opt.onError( request, opt );} catch(e) {/*ignore*/} } } }; /** | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 | try {opt.onError( request, opt );} catch(e) {/*ignore*/} } } }; /** WhAjaj.Connector.sendImpls holds several concrete implementations of WhAjaj.Connector.prototype.sendImpl(). To use a specific implementation by default assign WhAjaj.Connector.prototype.sendImpl to one of these functions. The functions defined here require that the 'this' object be-a WhAjaj.Connector instance. Historical notes: a) We once had an implementation based on Prototype, but that library just pisses me off (they change base-most types' prototypes, introducing side-effects in client code which doesn't even use Prototype). The Prototype version at the time had a serious toJSON() bug which caused empty arrays to serialize as the string "[]", which broke a bunch of my code. (That has been fixed in the mean time, but i don't use Prototype.) b) We once had an implementation for the dojo library, If/when the time comes to add Prototype/dojo support, we simply need to port: http://code.google.com/p/jsonmessage/source/browse/trunk/lib/JSONMessage/JSONMessage.inc.js (search that file for "dojo" and "Prototype") to this tree. That code is this code's generic grandfather and they are still very similar, so a port is trivial. */ WhAjaj.Connector.sendImpls = { /** This is a concrete implementation of WhAjaj.Connector.prototype.sendImpl() which uses the environment's native XMLHttpRequest class to send whiki requests and fetch the responses. The only argument must be a connection properties object, as constructed by WhAjaj.Connector.normalizeAjaxParameters(). If window.firebug is set then window.firebug.watchXHR() is called to enable monitoring of the XMLHttpRequest object. This implementation honors the loginName and loginPassword connection parameters. Returns the XMLHttpRequest object. This implementation requires that the 'this' object be-a WhAjaj.Connector. This implementation uses setTimeout() to implement the timeout support, and thus the JS engine must provide that functionality. */ XMLHttpRequest: function(request, args) { var json = WhAjaj.isObject(request) ? JSON.stringify(request) : request; var xhr = new XMLHttpRequest(); var startTime = (new Date()).getTime(); |
︙ | ︙ | |||
862 863 864 865 866 867 868 | { args.errorMessage = e.toString(); WhAjaj.Connector.sendHelper.onSendError.apply( whself, [request, args] ); return undefined; } }/*XMLHttpRequest()*/, /** | | | | | | | | | 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 | { args.errorMessage = e.toString(); WhAjaj.Connector.sendHelper.onSendError.apply( whself, [request, args] ); return undefined; } }/*XMLHttpRequest()*/, /** This is a concrete implementation of WhAjaj.Connector.prototype.sendImpl() which uses the jQuery AJAX API to send requests and fetch the responses. The first argument may be either null/false, an Object containing toJSON-able data to post to the back-end, or such an object in JSON string form. The second argument must be a connection properties object, as constructed by WhAjaj.Connector.normalizeAjaxParameters(). If window.firebug is set then window.firebug.watchXHR() is called to enable monitoring of the XMLHttpRequest object. This implementation honors the loginName and loginPassword connection parameters. Returns the XMLHttpRequest object. This implementation requires that the 'this' object be-a WhAjaj.Connector. */ jQuery:function(request,args) { var data = request || undefined; var whself = this; if( data ) { |
︙ | ︙ | |||
941 942 943 944 945 946 947 | { args.errorMessage = e.toString(); WhAjaj.Connector.sendHelper.onSendError.apply( whself, [request, args] ); return undefined; } }/*jQuery()*/, /** | | | | 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 | { args.errorMessage = e.toString(); WhAjaj.Connector.sendHelper.onSendError.apply( whself, [request, args] ); return undefined; } }/*jQuery()*/, /** This is a concrete implementation of WhAjaj.Connector.prototype.sendImpl() which uses the rhino Java API to send requests and fetch the responses. Limitations vis-a-vis the interface: - timeouts are not supported. - asynchronous mode is not supported because implementing it |
︙ | ︙ | |||
1049 1050 1051 1052 1053 1054 1055 | json = json.join(''); //print("READ IN JSON: "+json); WhAjaj.Connector.sendHelper.onSendSuccess.apply( self, [request, json, args] ); }/*rhino*/ }; /** | | | | | | | | | 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 | json = json.join(''); //print("READ IN JSON: "+json); WhAjaj.Connector.sendHelper.onSendSuccess.apply( self, [request, json, args] ); }/*rhino*/ }; /** An internal function which takes an object containing properties for a WhAjaj.Connector network request. This function creates a new object containing a superset of the properties from: a) opt b) this.options c) WhAjaj.Connector.options.ajax in that order, using the first one it finds. All non-function properties are _deeply_ copied via JSON cloning in order to prevent accidental "cross-request pollenation" (been there, done that). Functions cannot be cloned and are simply copied by reference. This function throws if JSON-copying one of the options fails (e.g. due to cyclic data structures). Reminder to self: this function does not "normalize" opt.urlParam by encoding it into opt.url, mainly for historical reasons, but also because that behaviour was specifically undesirable in this code's genetic father. */ WhAjaj.Connector.prototype.normalizeAjaxParameters = function (opt) { |
︙ | ︙ | |||
1098 1099 1100 1101 1102 1103 1104 | cp( this.options ); cp( WhAjaj.Connector.options.ajax ); // no, not here: rc.url = WhAjaj.Connector.sendHelper.normalizeURL(rc); return rc; }; /** | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 | cp( this.options ); cp( WhAjaj.Connector.options.ajax ); // no, not here: rc.url = WhAjaj.Connector.sendHelper.normalizeURL(rc); return rc; }; /** This is the generic interface for making calls to a back-end JSON-producing request handler. It is a simple wrapper around WhAjaj.Connector.prototype.sendImpl(), which just normalizes the connection options for sendImpl() and makes sure that opt.beforeSend() is (possibly) called. The request parameter must either be false/null/empty or a fully-populated JSON-able request object (which will be sent as unencoded application/json text), depending on the type of request being made. It is never semantically legal (in this API) for request to be a string/number/true/array value. As a rule, only POST requests use the request data. GET requests should encode their data in opt.url or opt.urlParam (see below). opt must contain the network-related parameters for the request. Paramters _not_ set in opt are pulled from this.options or WhAjaj.Connector.options.ajax (in that order, using the first value it finds). Thus the set of connection-level options used for the request are a superset of those various sources. The "normalized" (or "superimposed") opt object's URL may be modified before the request is sent, as follows: if opt.urlParam is a string then it is assumed to be properly URL-encoded parameters and is appended to the opt.url. If it is an Object then it is assumed to be a one-dimensional set of key/value pairs with simple values (numbers, strings, booleans, null, and NOT objects/arrays). The keys/values are URL-encoded and appended to the URL. The beforeSend() callback (see below) can modify the options object before the request attempt is made. The callbacks in the normalized opt object will be triggered as follows (if they are set to Function values): - beforeSend(request,opt) will be called before any network processing starts. If beforeSend() throws then no other callbacks are triggered and this function propagates the exception. This function is passed normalized connection options as its second parameter, and changes this function makes to that object _will_ be used for the pending connection attempt. - onError(request,opt) will be called if a connection to the back-end cannot be established. It will be passed the original request object (which might be null, depending on the request type) and the normalized options object. In the error case, the opt object passed to onError() "should" have a property called "errorMessage" which contains a description of the problem. - onError(request,opt) will also be called if connection succeeds but the response is not JSON data. - onResponse(response,request) will be called if the response returns JSON data. That data might hold an error response code - clients need to check for that. It is passed the response object (a plain object) and the original request object. 
- afterSend(request,opt) will be called directly after the AJAX request is finished, before onError() or onResonse() are called. Possible TODO: we explicitly do NOT pass the response to this function in order to keep the line between the responsibilities of the various callback clear (otherwise this could be used the same as onResponse()). In practice it would sometimes be useful have the response passed to this function, mainly for logging/debugging |
︙ | ︙ | |||
1177 1178 1179 1180 1181 1182 1183 | { if( !WhAjaj.isFunction(this.sendImpl) ) { throw new Error("This object has no sendImpl() member function! I don't know how to send the request!"); } var ex = false; var av = Array.prototype.slice.apply( arguments, [0] ); | | | | | 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 | { if( !WhAjaj.isFunction(this.sendImpl) ) { throw new Error("This object has no sendImpl() member function! I don't know how to send the request!"); } var ex = false; var av = Array.prototype.slice.apply( arguments, [0] ); /** FIXME: how to handle the error, vis-a-vis- the callbacks, if normalizeAjaxParameters() throws? It can throw if (de)JSON-izing fails. */ var norm = this.normalizeAjaxParameters( WhAjaj.isObject(opt) ? opt : {} ); norm.url = WhAjaj.Connector.sendHelper.normalizeURL(norm); if( ! request ) norm.method = 'GET'; var cb = this.callbacks || {}; if( this.callbacks && WhAjaj.isFunction(this.callbacks.beforeSend) ) { |
︙ | ︙ |
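Most of this check-in reflows the WhAjaj.Connector documentation, so a compact sketch of the documented connection options and a sendRequest() call may help when reading the hunks above. It is only a sketch assembled from those comments; the URL and the 'page' parameter name are assumptions.

    // Hedged sketch of WhAjaj.Connector usage, mirroring the @code sample
    // documented above; the URL and urlParam key are illustrative only.
    var cgi = new WhAjaj.Connector({
        url: '/cgi-bin/my.cgi',      // back-end URL (from the doc sample)
        timeout: 10000,              // connection timeout in milliseconds
        beforeSend: function(req, opt) { /* e.g. show an activity indicator */ },
        afterSend: function(req, opt) { /* e.g. hide the activity indicator */ },
        onError: function(req, opt) {
            alert('AJAJ request failed:\n' + JSON.stringify(opt, 0, 4));
        }
    });

    // A GET-style request: no request body, so parameters travel in urlParam,
    // which is urlencoded and appended to opt.url as documented above.
    cgi.sendRequest(null, {
        urlParam: { page: 'HomePage' },   // assumed parameter name
        onResponse: function(resp, req) {
            alert(WhAjaj.stringify(resp));
        }
    });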
Changes to ajax/wiki-editor.html.
︙ | ︙ | |||
275 276 277 278 279 280 281 | onResponse:function(resp,req){ TheApp.onResponse(resp,req); if(resp.resultCode) return; delete p.isNew; p.timestamp = resp.payload.timestamp; } }); | | | 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 | onResponse:function(resp,req){ TheApp.onResponse(resp,req); if(resp.resultCode) return; delete p.isNew; p.timestamp = resp.payload.timestamp; } }); }; TheApp.createNewPage = function(){ var name = prompt("New page name?"); if(!name) return; var p = { name:name, |
︙ | ︙ |
Changes to auto.def.
︙ | ︙ | |||
86 87 88 89 90 91 92 | # search for the system SQLite once with -ldl, and once without. If # the library can only be found with $extralibs set to -ldl, then # the code below will append -ldl to LIBS. # foreach extralibs {{} {-ldl}} { # Locate the system SQLite by searching for sqlite3_open(). Then check | | | | | | 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 | # search for the system SQLite once with -ldl, and once without. If # the library can only be found with $extralibs set to -ldl, then # the code below will append -ldl to LIBS. # foreach extralibs {{} {-ldl}} { # Locate the system SQLite by searching for sqlite3_open(). Then check # if sqlite3_trace_v2() can be found as well. If we can find open() but # not trace_v2(), then the system SQLite is too old to link against # fossil. # if {[check-function-in-lib sqlite3_open sqlite3 $extralibs]} { if {![check-function-in-lib sqlite3_trace_v2 sqlite3 $extralibs]} { user-error "system sqlite3 too old (require >= 3.14.0)" } # Success. Update symbols and return. # define USE_SYSTEM_SQLITE 1 define-append LIBS -lsqlite3 define-append LIBS $extralibs |
︙ | ︙ | |||
192 193 194 195 196 197 198 | } # Helper for OpenSSL checking proc check-for-openssl {msg {cflags {}} {libs {-lssl -lcrypto}}} { msg-checking "Checking for $msg..." set rc 0 if {[is_mingw]} { | | | 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 | } # Helper for OpenSSL checking proc check-for-openssl {msg {cflags {}} {libs {-lssl -lcrypto}}} { msg-checking "Checking for $msg..." set rc 0 if {[is_mingw]} { lappend libs -lgdi32 -lwsock32 -lcrypt32 } if {[info exists ::zlib_lib]} { lappend libs $::zlib_lib } msg-quiet cc-with [list -cflags $cflags -libs $libs] { if {[cc-check-includes openssl/ssl.h] && \ [cc-check-functions SSL_new]} { |
︙ | ︙ | |||
311 312 313 314 315 316 317 | } else { define-append LIBS -lssl -lcrypto } if {[info exists ::zlib_lib]} { define-append LIBS $::zlib_lib } if {[is_mingw]} { | | | 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 | } else { define-append LIBS -lssl -lcrypto } if {[info exists ::zlib_lib]} { define-append LIBS $::zlib_lib } if {[is_mingw]} { define-append LIBS -lgdi32 -lwsock32 -lcrypt32 } msg-result "HTTPS support enabled" # Silence OpenSSL deprecation warnings on Mac OS X 10.7. if {[string match *-darwin* [get-define host]]} { if {[cctest -cflags {-Wdeprecated-declarations}]} { define-append EXTRA_CFLAGS -Wdeprecated-declarations |
︙ | ︙ | |||
   }
   cc-check-function-in-lib dlopen dl
   cc-check-function-in-lib sin m
   # Check for the FuseFS library
   if {[opt-bool fusefs]} {
     if {[cc-check-function-in-lib fuse_mount fuse]} {
+      define-append EXTRA_CFLAGS -DFOSSIL_HAVE_FUSEFS
       define FOSSIL_HAVE_FUSEFS 1
       define-append LIBS -lfuse
       msg-result "FuseFS support enabled"
     }
   }
   make-template Makefile.in
   make-config-header autoconfig.h -auto {USE_* FOSSIL_*}
Changes to autosetup/README.autosetup.
New line 1:

   This is autosetup v0.6.6. See http://msteveb.github.com/autosetup/
Changes to autosetup/autosetup.
1 2 3 4 5 6 7 | #!/bin/sh # Copyright (c) 2006-2011 WorkWare Systems http://www.workware.net.au/ # All rights reserved # vim:se syntax=tcl: # \ dir=`dirname "$0"`; exec "`$dir/find-tclsh`" "$0" "$@" | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | #!/bin/sh # Copyright (c) 2006-2011 WorkWare Systems http://www.workware.net.au/ # All rights reserved # vim:se syntax=tcl: # \ dir=`dirname "$0"`; exec "`$dir/find-tclsh`" "$0" "$@" set autosetup(version) 0.6.6 # Can be set to 1 to debug early-init problems set autosetup(debug) 0 ################################################################## # # Main flow of control, option handling |
︙ | ︙ | |||
131 132 133 134 135 136 137 | } if {[opt-val {manual ref reference}] ne ""} { use help autosetup_reference [opt-val {manual ref reference}] } | < | < < | > > > > > > > > > > > | | | > > | 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 | } if {[opt-val {manual ref reference}] ne ""} { use help autosetup_reference [opt-val {manual ref reference}] } # Allow combining --install and --init set earlyexit 0 if {[opt-val install] ne ""} { use install autosetup_install [opt-val install] incr earlyexit } if {[opt-val init] ne ""} { use init autosetup_init [opt-val init] incr earlyexit } if {$earlyexit} { exit 0 } if {![file exists $autosetup(autodef)]} { # Check for invalid option first options {} user-error "No auto.def found in \"$autosetup(srcdir)\" (use [file tail $::autosetup(exe)] --init to create one)" } # Parse extra arguments into autosetup(cmdline) foreach arg $argv { if {[regexp {([^=]*)=(.*)} $arg -> n v]} { dict set autosetup(cmdline) $n $v define $n $v } else { user-error "Unexpected parameter: $arg" } } autosetup_add_dep $autosetup(autodef) define CONFIGURE_OPTS "" foreach arg $autosetup(argv) { define-append CONFIGURE_OPTS [quote-if-needed $arg] } define AUTOREMAKE [file-normalize $autosetup(exe)] define-append AUTOREMAKE [get-define CONFIGURE_OPTS] # Log how we were invoked configlog "Invoked as: [getenv WRAPPER $::argv0] [quote-argv $autosetup(argv)]" # Note that auto.def is *not* loaded in the global scope source $autosetup(autodef) |
︙ | ︙ | |||
190 191 192 193 194 195 196 197 198 199 200 201 202 203 | exit 0 } # @opt-bool option ... # # Check each of the named, boolean options and return 1 if any of them have # been set by the user. # proc opt-bool {args} { option-check-names {*}$args opt_bool ::useropts {*}$args } # @opt-val option-list ?default=""? | > > | 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 | exit 0 } # @opt-bool option ... # # Check each of the named, boolean options and return 1 if any of them have # been set by the user. # If the option was specified more than once, the last value wins. # e.g. With --enable-foo --disable-foo, [opt-bool foo] will return 0 # proc opt-bool {args} { option-check-names {*}$args opt_bool ::useropts {*}$args } # @opt-val option-list ?default=""? |
︙ | ︙ | |||
396 397 398 399 400 401 402 | # # An argument option (one which takes a parameter) is of the form: # ## name:[=]value => "Description of this option" # # If the name:value form is used, the value must be provided with the option (as --name=myvalue). # If the name:=value form is used, the value is optional and the given value is used as the default | | | 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 | # # An argument option (one which takes a parameter) is of the form: # ## name:[=]value => "Description of this option" # # If the name:value form is used, the value must be provided with the option (as --name=myvalue). # If the name:=value form is used, the value is optional and the given value is used as the default # if it is not provided. # # Undocumented options are also supported by omitting the "=> description. # These options are not displayed with --help and can be useful for internal options or as aliases. # # For example, --disable-lfs is an alias for --disable=largefile: # ## lfs=1 largefile=1 => "Disable large file support" |
︙ | ︙ | |||
456 457 458 459 460 461 462 463 464 465 466 467 468 469 | # These (name, value) pairs represent the results of the configuration check # and are available to be checked, modified and substituted. # proc define {name {value 1}} { set ::define($name) $value #dputs "$name <= $value" } # @define-append name value ... # # Appends the given value(s) to the given 'defined' variable. # If the variable is not defined or empty, it is set to $value. # Otherwise the value is appended, separated by a space. # Any extra values are similarly appended. | > > > > > > > > > | 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 | # These (name, value) pairs represent the results of the configuration check # and are available to be checked, modified and substituted. # proc define {name {value 1}} { set ::define($name) $value #dputs "$name <= $value" } # @undefine name # # Undefine the named variable # proc undefine {name} { unset -nocomplain ::define($name) #dputs "$name <= <undef>" } # @define-append name value ... # # Appends the given value(s) to the given 'defined' variable. # If the variable is not defined or empty, it is set to $value. # Otherwise the value is appended, separated by a space. # Any extra values are similarly appended. |
︙ | ︙ | |||
544 545 546 547 548 549 550 | } return 0 } # @readfile filename ?default=""? # # Return the contents of the file, without the trailing newline. | | | 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 | } return 0 } # @readfile filename ?default=""? # # Return the contents of the file, without the trailing newline. # If the file doesn't exist or can't be read, returns $default. # proc readfile {filename {default_value ""}} { set result $default_value catch { set f [open $filename] set result [read -nonewline $f] close $f |
︙ | ︙ | |||
1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 | # All rights reserved # Simple getopt module # Parse everything out of the argv list which looks like an option # Knows about --enable-thing and --disable-thing as alternatives for --thing=0 or --thing=1 # Everything which doesn't look like an option, or is after --, is left unchanged proc getopt {argvname} { upvar $argvname argv set nargv {} for {set i 0} {$i < [llength $argv]} {incr i} { set arg [lindex $argv $i] | > | 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 | # All rights reserved # Simple getopt module # Parse everything out of the argv list which looks like an option # Knows about --enable-thing and --disable-thing as alternatives for --thing=0 or --thing=1 # Everything which doesn't look like an option, or is after --, is left unchanged # proc getopt {argvname} { upvar $argvname argv set nargv {} for {set i 0} {$i < [llength $argv]} {incr i} { set arg [lindex $argv $i] |
︙ | ︙ | |||
1139 1140 1141 1142 1143 1144 1145 | # Support the args being passed as a list if {[llength $args] == 1} { set args [lindex $args 0] } foreach o $args { if {[info exists opts($o)]} { | > | | 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 | # Support the args being passed as a list if {[llength $args] == 1} { set args [lindex $args 0] } foreach o $args { if {[info exists opts($o)]} { # For boolean options, the last value wins if {[lindex $opts($o) end] in {"1" "yes"}} { return 1 } } } return 0 } } |
︙ | ︙ | |||
1338 1339 1340 1341 1342 1343 1344 | if {$help} { puts "Use one of the following types (e.g. --init=make)\n" foreach type [lsort [dict keys $::autosetup(inittypes)]] { lassign [dict get $::autosetup(inittypes) $type] desc # XXX: Use the options-show code to wrap the description puts [format "%-10s %s" $type $desc] } | | < < | 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 1384 | if {$help} { puts "Use one of the following types (e.g. --init=make)\n" foreach type [lsort [dict keys $::autosetup(inittypes)]] { lassign [dict get $::autosetup(inittypes) $type] desc # XXX: Use the options-show code to wrap the description puts [format "%-10s %s" $type $desc] } return } lassign [dict get $::autosetup(inittypes) $type] desc script puts "Initialising $type: $desc\n" # All initialisations happens in the top level srcdir cd $::autosetup(srcdir) uplevel #0 $script } proc autosetup_add_init_type {type desc script} { dict set ::autosetup(inittypes) $type [list $desc $script] } # This is for in creating build-system init scripts |
︙ | ︙ | |||
1391 1392 1393 1394 1395 1396 1397 | proc autosetup_install {dir} { if {[catch { cd $dir file mkdir autosetup set f [open autosetup/autosetup w] | | | 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 | proc autosetup_install {dir} { if {[catch { cd $dir file mkdir autosetup set f [open autosetup/autosetup w] set publicmodules [glob $::autosetup(libdir)/*.auto] # First the main script, but only up until "CUT HERE" set in [open $::autosetup(dir)/autosetup] while {[gets $in buf] >= 0} { if {$buf ne "##-- CUT HERE --##"} { puts $f $buf continue |
︙ | ︙ | |||
1442 1443 1444 1445 1446 1447 1448 | } error]} { user-error "Failed to install autosetup: $error" } puts "Installed [autosetup_version] to autosetup/" # Now create 'configure' if necessary autosetup_create_configure | < < | 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 | } error]} { user-error "Failed to install autosetup: $error" } puts "Installed [autosetup_version] to autosetup/" # Now create 'configure' if necessary autosetup_create_configure } proc autosetup_create_configure {} { if {[file exists configure]} { if {!$::autosetup(force)} { # Could this be an autosetup configure? if {![string match "*\nWRAPPER=*" [readfile configure]]} { |
︙ | ︙ |
Changes to autosetup/cc-db.tcl.
1 2 3 4 5 | # Copyright (c) 2011 WorkWare Systems http://www.workware.net.au/ # All rights reserved # @synopsis: # | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 | # Copyright (c) 2011 WorkWare Systems http://www.workware.net.au/ # All rights reserved # @synopsis: # # The 'cc-db' module provides a knowledge based of system idiosyncrasies # In general, this module can always be included use cc module-options {} # openbsd needs sys/types.h to detect some system headers |
︙ | ︙ |
Changes to autosetup/cc-shared.tcl.
︙ | ︙ | |||
12 13 14 15 16 17 18 | ## SH_SOEXT Extension for shared libs ## SH_SOEXTVER Format for versioned shared libs - %s = version ## SHOBJ_CFLAGS Flags to use compiling sources destined for a shared object ## SHOBJ_LDFLAGS Flags to use linking a shared object, undefined symbols allowed ## SHOBJ_LDFLAGS_R - as above, but all symbols must be resolved ## SH_LINKFLAGS Flags to use linking an executable which will load shared objects ## LD_LIBRARY_PATH Environment variable which specifies path to shared libraries | | | 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 | ## SH_SOEXT Extension for shared libs ## SH_SOEXTVER Format for versioned shared libs - %s = version ## SHOBJ_CFLAGS Flags to use compiling sources destined for a shared object ## SHOBJ_LDFLAGS Flags to use linking a shared object, undefined symbols allowed ## SHOBJ_LDFLAGS_R - as above, but all symbols must be resolved ## SH_LINKFLAGS Flags to use linking an executable which will load shared objects ## LD_LIBRARY_PATH Environment variable which specifies path to shared libraries ## STRIPLIBFLAGS Arguments to strip a dynamic library module-options {} # Defaults: gcc on unix define SHOBJ_CFLAGS -fpic define SHOBJ_LDFLAGS -shared define SH_CFLAGS -fpic |
︙ | ︙ |
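For context (not part of this check-in), an auto.def along the following lines is roughly how the cc-shared defines documented above, including the new STRIPLIBFLAGS, typically reach a build; the template name and messages are assumptions.

    # auto.def (sketch)
    use cc cc-shared

    # cc-shared populates SHOBJ_CFLAGS, SHOBJ_LDFLAGS, SH_LINKFLAGS,
    # STRIPLIBFLAGS, ... for the host toolchain; they can be read directly...
    msg-result "Shared object flags: [get-define SHOBJ_CFLAGS]"

    # ...or substituted into a template as @SHOBJ_CFLAGS@, @STRIPLIBFLAGS@, etc.
    make-template Makefile.in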
Changes to autosetup/cc.tcl.
︙ | ︙ | |||
160 161 162 163 164 165 166 | cctest_define $each } } # @cc-check-decls name ... # # Checks that each given name is either a preprocessor symbol or rvalue | | | 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 | cctest_define $each } } # @cc-check-decls name ... # # Checks that each given name is either a preprocessor symbol or rvalue # such as an enum. Note that the define used is HAVE_DECL_xxx # rather than HAVE_xxx proc cc-check-decls {args} { set ret 1 foreach name $args { msg-checking "Checking for $name..." set r [cctest_decl $name] define-feature "decl $name" $r |
︙ | ︙ | |||
199 200 201 202 203 204 205 | cc-check-some-feature $args { cctest_member $each } } # @cc-check-function-in-lib function libs ?otherlibs? # | | | 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 | cc-check-some-feature $args { cctest_member $each } } # @cc-check-function-in-lib function libs ?otherlibs? # # Checks that the given function can be found in one of the libs. # # First checks for no library required, then checks each of the libraries # in turn. # # If the function is found, the feature is defined and lib_$function is defined # to -l$lib where the function was found, or "" if no library required. # In addition, -l$lib is added to the LIBS define. |
︙ | ︙ | |||
283 284 285 286 287 288 289 | # @cc-check-progs prog ... # # Checks for existence of the given executables on the path. # # For example, when checking for "grep", the path is searched for # the executable, 'grep', and if found GREP is defined as "grep". # | | | 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 | # @cc-check-progs prog ... # # Checks for existence of the given executables on the path. # # For example, when checking for "grep", the path is searched for # the executable, 'grep', and if found GREP is defined as "grep". # # If the executable is not found, the variable is defined as false. # Returns 1 if all programs were found, or 0 otherwise. # proc cc-check-progs {args} { set failed 0 foreach prog $args { set PROG [string toupper $prog] msg-checking "Checking for $prog..." |
︙ | ︙ |
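As a usage sketch (not part of this check-in), the three checks documented in these hunks would appear in an auto.def roughly as follows; the particular symbol, function, and library probed are assumptions for illustration.

    use cc

    cc-with {-includes fcntl.h} {
        # Defines HAVE_DECL_O_CLOEXEC (note: HAVE_DECL_xxx, not HAVE_xxx)
        cc-check-decls O_CLOEXEC
    }

    # Defines HAVE_CLOCK_GETTIME, sets lib_clock_gettime to "" or "-lrt",
    # and appends the needed -l flag to the LIBS define
    cc-check-function-in-lib clock_gettime rt

    # Defines GREP to "grep" if found on the path, or to "false" otherwise
    cc-check-progs grep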
Changes to autosetup/config.guess.
1 2 3 4 | #! /bin/sh # Attempt to guess a canonical system name. # Copyright 1992-2014 Free Software Foundation, Inc. | | | 1 2 3 4 5 6 7 8 9 10 11 12 | #! /bin/sh # Attempt to guess a canonical system name. # Copyright 1992-2014 Free Software Foundation, Inc. timestamp='2014-11-04' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but |
︙ | ︙ | |||
20 21 22 23 24 25 26 | # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that # program. This Exception is an additional permission under section 7 # of the GNU General Public License, version 3 ("GPLv3"). # | | | | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 | # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that # program. This Exception is an additional permission under section 7 # of the GNU General Public License, version 3 ("GPLv3"). # # Originally written by Per Bothner; maintained since 2000 by Ben Elliston. # # You can get the latest version of this script from: # http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD # # Please send patches to <config-patches@gnu.org>. me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] |
︙ | ︙ | |||
575 576 577 578 579 580 581 | *:AIX:*:[4567]) IBM_CPU_ID=`/usr/sbin/lsdev -C -c processor -S available | sed 1q | awk '{ print $1 }'` if /usr/sbin/lsattr -El ${IBM_CPU_ID} | grep ' POWER' >/dev/null 2>&1; then IBM_ARCH=rs6000 else IBM_ARCH=powerpc fi | | | > | 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 | *:AIX:*:[4567]) IBM_CPU_ID=`/usr/sbin/lsdev -C -c processor -S available | sed 1q | awk '{ print $1 }'` if /usr/sbin/lsattr -El ${IBM_CPU_ID} | grep ' POWER' >/dev/null 2>&1; then IBM_ARCH=rs6000 else IBM_ARCH=powerpc fi if [ -x /usr/bin/lslpp ] ; then IBM_REV=`/usr/bin/lslpp -Lqc bos.rte.libc | awk -F: '{ print $3 }' | sed s/[0-9]*$/0/` else IBM_REV=${UNAME_VERSION}.${UNAME_RELEASE} fi echo ${IBM_ARCH}-ibm-aix${IBM_REV} exit ;; *:AIX:*:*) echo rs6000-ibm-aix |
︙ | ︙ |
Changes to autosetup/config.sub.
1 2 3 4 | #! /bin/sh # Configuration validation subroutine script. # Copyright 1992-2014 Free Software Foundation, Inc. | | | 1 2 3 4 5 6 7 8 9 10 11 12 | #! /bin/sh # Configuration validation subroutine script. # Copyright 1992-2014 Free Software Foundation, Inc. timestamp='2014-12-03' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but |
︙ | ︙ | |||
21 22 23 24 25 26 27 | # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that # program. This Exception is an additional permission under section 7 # of the GNU General Public License, version 3 ("GPLv3"). | | | 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 | # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that # program. This Exception is an additional permission under section 7 # of the GNU General Public License, version 3 ("GPLv3"). # Please send patches to <config-patches@gnu.org>. # # Configuration subroutine to validate and canonicalize a configuration type. # Supply the specified configuration type as an argument. # If it is invalid, we print an error message on stderr and exit with code 1. # Otherwise, we print the canonical config type on stdout and succeed. # You can get the latest version of this script from: |
︙ | ︙ | |||
298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 | | nds32 | nds32le | nds32be \ | nios | nios2 | nios2eb | nios2el \ | ns16k | ns32k \ | open8 | or1k | or1knd | or32 \ | pdp10 | pdp11 | pj | pjl \ | powerpc | powerpc64 | powerpc64le | powerpcle \ | pyramid \ | rl78 | rx \ | score \ | sh | sh[1234] | sh[24]a | sh[24]aeb | sh[23]e | sh[34]eb | sheb | shbe | shle | sh[1234]le | sh3ele \ | sh64 | sh64le \ | sparc | sparc64 | sparc64b | sparc64v | sparc86x | sparclet | sparclite \ | sparcv8 | sparcv9 | sparcv9b | sparcv9v \ | spu \ | tahoe | tic4x | tic54x | tic55x | tic6x | tic80 | tron \ | ubicom32 \ | v850 | v850e | v850e1 | v850e2 | v850es | v850e2v3 \ | we32k \ | x86 | xc16x | xstormy16 | xtensa \ | z8k | z80) basic_machine=$basic_machine-unknown ;; c54x) basic_machine=tic54x-unknown ;; c55x) basic_machine=tic55x-unknown ;; c6x) basic_machine=tic6x-unknown ;; m6811 | m68hc11 | m6812 | m68hc12 | m68hcs12x | nvptx | picochip) basic_machine=$basic_machine-unknown os=-none ;; m88110 | m680[12346]0 | m683?2 | m68360 | m5200 | v70 | w65 | z8k) ;; ms1) | > > > > > | 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 | | nds32 | nds32le | nds32be \ | nios | nios2 | nios2eb | nios2el \ | ns16k | ns32k \ | open8 | or1k | or1knd | or32 \ | pdp10 | pdp11 | pj | pjl \ | powerpc | powerpc64 | powerpc64le | powerpcle \ | pyramid \ | riscv32 | riscv64 \ | rl78 | rx \ | score \ | sh | sh[1234] | sh[24]a | sh[24]aeb | sh[23]e | sh[34]eb | sheb | shbe | shle | sh[1234]le | sh3ele \ | sh64 | sh64le \ | sparc | sparc64 | sparc64b | sparc64v | sparc86x | sparclet | sparclite \ | sparcv8 | sparcv9 | sparcv9b | sparcv9v \ | spu \ | tahoe | tic4x | tic54x | tic55x | tic6x | tic80 | tron \ | ubicom32 \ | v850 | v850e | v850e1 | v850e2 | v850es | v850e2v3 \ | visium \ | we32k \ | x86 | xc16x | xstormy16 | xtensa \ | z8k | z80) basic_machine=$basic_machine-unknown ;; c54x) basic_machine=tic54x-unknown ;; c55x) basic_machine=tic55x-unknown ;; c6x) basic_machine=tic6x-unknown ;; leon|leon[3-9]) basic_machine=sparc-$basic_machine ;; m6811 | m68hc11 | m6812 | m68hc12 | m68hcs12x | nvptx | picochip) basic_machine=$basic_machine-unknown os=-none ;; m88110 | m680[12346]0 | m683?2 | m68360 | m5200 | v70 | w65 | z8k) ;; ms1) |
︙ | ︙ | |||
432 433 434 435 436 437 438 439 440 441 442 443 444 445 | | tahoe-* \ | tic30-* | tic4x-* | tic54x-* | tic55x-* | tic6x-* | tic80-* \ | tile*-* \ | tron-* \ | ubicom32-* \ | v850-* | v850e-* | v850e1-* | v850es-* | v850e2-* | v850e2v3-* \ | vax-* \ | we32k-* \ | x86-* | x86_64-* | xc16x-* | xps100-* \ | xstormy16-* | xtensa*-* \ | ymp-* \ | z8k-* | z80-*) ;; # Recognize the basic CPU types without company name, with glob match. | > | 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 | | tahoe-* \ | tic30-* | tic4x-* | tic54x-* | tic55x-* | tic6x-* | tic80-* \ | tile*-* \ | tron-* \ | ubicom32-* \ | v850-* | v850e-* | v850e1-* | v850es-* | v850e2-* | v850e2v3-* \ | vax-* \ | visium-* \ | we32k-* \ | x86-* | x86_64-* | xc16x-* | xps100-* \ | xstormy16-* | xtensa*-* \ | ymp-* \ | z8k-* | z80-*) ;; # Recognize the basic CPU types without company name, with glob match. |
︙ | ︙ | |||
769 770 771 772 773 774 775 776 777 778 779 780 781 782 | ;; esac ;; isi68 | isi) basic_machine=m68k-isi os=-sysv ;; m68knommu) basic_machine=m68k-unknown os=-linux ;; m68knommu-*) basic_machine=m68k-`echo $basic_machine | sed 's/^[^-]*-//'` os=-linux | > > > | 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 | ;; esac ;; isi68 | isi) basic_machine=m68k-isi os=-sysv ;; leon-*|leon[3-9]-*) basic_machine=sparc-`echo $basic_machine | sed 's/-.*//'` ;; m68knommu) basic_machine=m68k-unknown os=-linux ;; m68knommu-*) basic_machine=m68k-`echo $basic_machine | sed 's/^[^-]*-//'` os=-linux |
︙ | ︙ | |||
824 825 826 827 828 829 830 831 832 833 834 835 836 837 | basic_machine=m68k-rom68k os=-coff ;; morphos) basic_machine=powerpc-unknown os=-morphos ;; msdos) basic_machine=i386-pc os=-msdos ;; ms1-*) basic_machine=`echo $basic_machine | sed -e 's/ms1-/mt-/'` ;; | > > > > | 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 | basic_machine=m68k-rom68k os=-coff ;; morphos) basic_machine=powerpc-unknown os=-morphos ;; moxiebox) basic_machine=moxie-unknown os=-moxiebox ;; msdos) basic_machine=i386-pc os=-msdos ;; ms1-*) basic_machine=`echo $basic_machine | sed -e 's/ms1-/mt-/'` ;; |
︙ | ︙ | |||
1369 1370 1371 1372 1373 1374 1375 | | -bosx* | -nextstep* | -cxux* | -aout* | -elf* | -oabi* \ | -ptx* | -coff* | -ecoff* | -winnt* | -domain* | -vsta* \ | -udi* | -eabi* | -lites* | -ieee* | -go32* | -aux* \ | -chorusos* | -chorusrdb* | -cegcc* \ | -cygwin* | -msys* | -pe* | -psos* | -moss* | -proelf* | -rtems* \ | -mingw32* | -mingw64* | -linux-gnu* | -linux-android* \ | -linux-newlib* | -linux-musl* | -linux-uclibc* \ | | | 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 1394 1395 1396 | | -bosx* | -nextstep* | -cxux* | -aout* | -elf* | -oabi* \ | -ptx* | -coff* | -ecoff* | -winnt* | -domain* | -vsta* \ | -udi* | -eabi* | -lites* | -ieee* | -go32* | -aux* \ | -chorusos* | -chorusrdb* | -cegcc* \ | -cygwin* | -msys* | -pe* | -psos* | -moss* | -proelf* | -rtems* \ | -mingw32* | -mingw64* | -linux-gnu* | -linux-android* \ | -linux-newlib* | -linux-musl* | -linux-uclibc* \ | -uxpv* | -beos* | -mpeix* | -udk* | -moxiebox* \ | -interix* | -uwin* | -mks* | -rhapsody* | -darwin* | -opened* \ | -openstep* | -oskit* | -conix* | -pw32* | -nonstopux* \ | -storm-chaos* | -tops10* | -tenex* | -tops20* | -its* \ | -os2* | -vos* | -palmos* | -uclinux* | -nucleus* \ | -morphos* | -superux* | -rtmk* | -rtmk-nova* | -windiss* \ | -powermax* | -dnix* | -nx6 | -nx7 | -sei* | -dragonfly* \ | -skyos* | -haiku* | -rdos* | -toppers* | -drops* | -es* | -tirtos*) |
︙ | ︙ |
Changes to autosetup/jimsh0.c.
1 | /* This is single source file, bootstrap version of Jim Tcl. See http://jim.tcl.tk/ */ | < < | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 | /* This is single source file, bootstrap version of Jim Tcl. See http://jim.tcl.tk/ */ #define JIM_TCL_COMPAT #define JIM_ANSIC #define JIM_REGEXP #define HAVE_NO_AUTOCONF #define _JIMAUTOCONF_H #define TCL_LIBRARY "." #define jim_ext_bootstrap #define jim_ext_aio #define jim_ext_readdir #define jim_ext_regexp #define jim_ext_file #define jim_ext_glob #define jim_ext_exec #define jim_ext_clock #define jim_ext_array #define jim_ext_stdlib #define jim_ext_tclcompat #if defined(_MSC_VER) #define TCL_PLATFORM_OS "windows" |
︙ | ︙ | |||
33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 | #define HAVE_SYS_TIME_H #define HAVE_DIRENT_H #define HAVE_UNISTD_H #else #define TCL_PLATFORM_OS "unknown" #define TCL_PLATFORM_PLATFORM "unix" #define TCL_PLATFORM_PATH_SEPARATOR ":" #define HAVE_VFORK #define HAVE_WAITPID #define HAVE_ISATTY #define HAVE_MKSTEMP #define HAVE_LINK #define HAVE_SYS_TIME_H #define HAVE_DIRENT_H #define HAVE_UNISTD_H #endif | > > > > > > | > | > | 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 | #define HAVE_SYS_TIME_H #define HAVE_DIRENT_H #define HAVE_UNISTD_H #else #define TCL_PLATFORM_OS "unknown" #define TCL_PLATFORM_PLATFORM "unix" #define TCL_PLATFORM_PATH_SEPARATOR ":" #ifdef _MINIX #define vfork fork #define _POSIX_SOURCE #else #define _GNU_SOURCE #endif #define HAVE_VFORK #define HAVE_WAITPID #define HAVE_ISATTY #define HAVE_MKSTEMP #define HAVE_LINK #define HAVE_SYS_TIME_H #define HAVE_DIRENT_H #define HAVE_UNISTD_H #endif #define JIM_VERSION 77 #ifndef JIM_WIN32COMPAT_H #define JIM_WIN32COMPAT_H #ifdef __cplusplus extern "C" { #endif #if defined(_WIN32) || defined(WIN32) #define HAVE_DLOPEN void *dlopen(const char *path, int mode); int dlclose(void *handle); void *dlsym(void *handle, const char *symbol); char *dlerror(void); #if defined(__MINGW32__) #define JIM_SPRINTF_DOUBLE_NEEDS_FIX #endif #ifdef _MSC_VER #if _MSC_VER >= 1000 #pragma warning(disable:4146) #endif |
︙ | ︙ | |||
101 102 103 104 105 106 107 | #define HAVE_OPENDIR struct dirent { char *d_name; }; typedef struct DIR { | | | | | | 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 | #define HAVE_OPENDIR struct dirent { char *d_name; }; typedef struct DIR { long handle; struct _finddata_t info; struct dirent result; char *name; } DIR; DIR *opendir(const char *name); int closedir(DIR *dir); struct dirent *readdir(DIR *dir); #elif defined(__MINGW32__) #include <stdlib.h> #define strtod __strtod #endif #endif #ifdef __cplusplus } #endif #endif #ifndef UTF8_UTIL_H |
︙ | ︙ | |||
171 172 173 174 175 176 177 | #ifdef __cplusplus extern "C" { #endif #include <time.h> #include <limits.h> | | | | | 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 | #ifdef __cplusplus extern "C" { #endif #include <time.h> #include <limits.h> #include <stdio.h> #include <stdlib.h> #include <stdarg.h> #ifndef HAVE_NO_AUTOCONF #endif |
︙ | ︙ | |||
220 221 222 223 224 225 226 | #define JIM_BREAK 3 #define JIM_CONTINUE 4 #define JIM_SIGNAL 5 #define JIM_EXIT 6 #define JIM_EVAL 7 | | | | | | | | | | | | | | | 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 | #define JIM_BREAK 3 #define JIM_CONTINUE 4 #define JIM_SIGNAL 5 #define JIM_EXIT 6 #define JIM_EVAL 7 #define JIM_MAX_CALLFRAME_DEPTH 1000 #define JIM_MAX_EVAL_DEPTH 2000 #define JIM_PRIV_FLAG_SHIFT 20 #define JIM_NONE 0 #define JIM_ERRMSG 1 #define JIM_ENUM_ABBREV 2 #define JIM_UNSHARED 4 #define JIM_MUSTEXIST 8 #define JIM_SUBST_NOVAR 1 #define JIM_SUBST_NOCMD 2 #define JIM_SUBST_NOESC 4 #define JIM_SUBST_FLAG 128 #define JIM_CASESENS 0 #define JIM_NOCASE 1 #define JIM_PATH_LEN 1024 #define JIM_NOTUSED(V) ((void) V) |
︙ | ︙ | |||
335 336 337 338 339 340 341 | #define Jim_GetHashEntryVal(he) ((he)->u.val) #define Jim_GetHashTableCollisions(ht) ((ht)->collisions) #define Jim_GetHashTableSize(ht) ((ht)->size) #define Jim_GetHashTableUsed(ht) ((ht)->used) typedef struct Jim_Obj { | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 | #define Jim_GetHashEntryVal(he) ((he)->u.val) #define Jim_GetHashTableCollisions(ht) ((ht)->collisions) #define Jim_GetHashTableSize(ht) ((ht)->size) #define Jim_GetHashTableUsed(ht) ((ht)->used) typedef struct Jim_Obj { char *bytes; const struct Jim_ObjType *typePtr; int refCount; int length; union { jim_wide wideValue; int intValue; double doubleValue; void *ptr; struct { void *ptr1; void *ptr2; } twoPtrValue; struct { struct Jim_Var *varPtr; unsigned long callFrameId; int global; } varValue; struct { struct Jim_Obj *nsObj; struct Jim_Cmd *cmdPtr; unsigned long procEpoch; } cmdValue; struct { struct Jim_Obj **ele; int len; int maxLen; } listValue; struct { int maxLength; int charLength; } strValue; struct { unsigned long id; struct Jim_Reference *refPtr; } refValue; struct { struct Jim_Obj *fileNameObj; int lineNumber; } sourceValue; struct { struct Jim_Obj *varNameObjPtr; struct Jim_Obj *indexObjPtr; } dictSubstValue; struct { void *compre; unsigned flags; } regexpValue; struct { int line; int argc; } scriptLineValue; } internalRep; struct Jim_Obj *prevObjPtr; struct Jim_Obj *nextObjPtr; } Jim_Obj; #define Jim_IncrRefCount(objPtr) \ ++(objPtr)->refCount #define Jim_DecrRefCount(interp, objPtr) \ if (--(objPtr)->refCount <= 0) Jim_FreeObj(interp, objPtr) |
︙ | ︙ | |||
438 439 440 441 442 443 444 | typedef void (Jim_FreeInternalRepProc)(struct Jim_Interp *interp, struct Jim_Obj *objPtr); typedef void (Jim_DupInternalRepProc)(struct Jim_Interp *interp, struct Jim_Obj *srcPtr, Jim_Obj *dupPtr); typedef void (Jim_UpdateStringProc)(struct Jim_Obj *objPtr); typedef struct Jim_ObjType { | | | | < < | | | | | | | | | | | | | < | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 | typedef void (Jim_FreeInternalRepProc)(struct Jim_Interp *interp, struct Jim_Obj *objPtr); typedef void (Jim_DupInternalRepProc)(struct Jim_Interp *interp, struct Jim_Obj *srcPtr, Jim_Obj *dupPtr); typedef void (Jim_UpdateStringProc)(struct Jim_Obj *objPtr); typedef struct Jim_ObjType { const char *name; Jim_FreeInternalRepProc *freeIntRepProc; Jim_DupInternalRepProc *dupIntRepProc; Jim_UpdateStringProc *updateStringProc; int flags; } Jim_ObjType; #define JIM_TYPE_NONE 0 #define JIM_TYPE_REFERENCES 1 typedef struct Jim_CallFrame { unsigned long id; int level; struct Jim_HashTable vars; struct Jim_HashTable *staticVars; struct Jim_CallFrame *parent; Jim_Obj *const *argv; int argc; Jim_Obj *procArgsObjPtr; Jim_Obj *procBodyObjPtr; struct Jim_CallFrame *next; Jim_Obj *nsObj; Jim_Obj *fileNameObj; int line; Jim_Stack *localCommands; struct Jim_Obj *tailcallObj; struct Jim_Cmd *tailcallCmd; } Jim_CallFrame; typedef struct Jim_Var { Jim_Obj *objPtr; struct Jim_CallFrame *linkFramePtr; } Jim_Var; typedef int Jim_CmdProc(struct Jim_Interp *interp, int argc, Jim_Obj *const *argv); typedef void Jim_DelCmdProc(struct Jim_Interp *interp, void *privData); typedef struct Jim_Cmd { int inUse; int isproc; struct Jim_Cmd *prevCmd; union { struct { Jim_CmdProc *cmdProc; Jim_DelCmdProc *delProc; void *privData; } native; struct { Jim_Obj *argListObjPtr; Jim_Obj *bodyObjPtr; Jim_HashTable *staticVars; int argListLen; int reqArity; int optArity; int argsPos; int upcall; struct Jim_ProcArg { Jim_Obj *nameObjPtr; Jim_Obj *defaultObjPtr; } *arglist; Jim_Obj *nsObj; } proc; } u; } Jim_Cmd; typedef struct Jim_PrngState { unsigned char sbox[256]; unsigned int i, j; } Jim_PrngState; typedef struct Jim_Interp { Jim_Obj *result; int errorLine; Jim_Obj *errorFileNameObj; int addStackTrace; int maxCallFrameDepth; int maxEvalDepth; int evalDepth; int returnCode; int returnLevel; int exitCode; long id; int signal_level; jim_wide sigmask; int (*signal_set_result)(struct Jim_Interp *interp, jim_wide sigmask); Jim_CallFrame *framePtr; Jim_CallFrame *topFramePtr; struct Jim_HashTable commands; unsigned long procEpoch; /* Incremented every time the result of procedures names lookup caching may no longer be valid. */ unsigned long callFrameEpoch; /* Incremented every time a new callframe is created. This id is used for the 'ID' field contained in the Jim_CallFrame structure. 
*/ int local; Jim_Obj *liveList; Jim_Obj *freeList; Jim_Obj *currentScriptObj; Jim_Obj *nullScriptObj; Jim_Obj *emptyObj; Jim_Obj *trueObj; Jim_Obj *falseObj; unsigned long referenceNextId; struct Jim_HashTable references; unsigned long lastCollectId; /* reference max Id of the last GC execution. It's set to -1 while the collection is running as sentinel to avoid to recursive calls via the [collect] command inside finalizers. */ time_t lastCollectTime; Jim_Obj *stackTrace; Jim_Obj *errorProc; Jim_Obj *unknown; int unknown_called; int errorFlag; void *cmdPrivData; /* Used to pass the private data pointer to a command. It is set to what the user specified via Jim_CreateCommand(). */ struct Jim_CallFrame *freeFramesList; struct Jim_HashTable assocData; Jim_PrngState *prngState; struct Jim_HashTable packages; Jim_Stack *loadHandles; } Jim_Interp; #define Jim_InterpIncrProcEpoch(i) (i)->procEpoch++ #define Jim_SetResultString(i,s,l) Jim_SetResult(i, Jim_NewStringObj(i,s,l)) #define Jim_SetResultInt(i,intval) Jim_SetResult(i, Jim_NewIntObj(i,intval)) #define Jim_SetResultBool(i,b) Jim_SetResultInt(i, b) |
︙ | ︙ | |||
735 736 737 738 739 740 741 | JIM_EXPORT int Jim_GetExitCode (Jim_Interp *interp); JIM_EXPORT const char *Jim_ReturnCode(int code); JIM_EXPORT void Jim_SetResultFormatted(Jim_Interp *interp, const char *format, ...); JIM_EXPORT void Jim_RegisterCoreCommands (Jim_Interp *interp); JIM_EXPORT int Jim_CreateCommand (Jim_Interp *interp, | | | | 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 | JIM_EXPORT int Jim_GetExitCode (Jim_Interp *interp); JIM_EXPORT const char *Jim_ReturnCode(int code); JIM_EXPORT void Jim_SetResultFormatted(Jim_Interp *interp, const char *format, ...); JIM_EXPORT void Jim_RegisterCoreCommands (Jim_Interp *interp); JIM_EXPORT int Jim_CreateCommand (Jim_Interp *interp, const char *cmdName, Jim_CmdProc *cmdProc, void *privData, Jim_DelCmdProc *delProc); JIM_EXPORT int Jim_DeleteCommand (Jim_Interp *interp, const char *cmdName); JIM_EXPORT int Jim_RenameCommand (Jim_Interp *interp, const char *oldName, const char *newName); JIM_EXPORT Jim_Cmd * Jim_GetCommand (Jim_Interp *interp, Jim_Obj *objPtr, int flags); JIM_EXPORT int Jim_SetVariable (Jim_Interp *interp, |
︙ | ︙ | |||
830 831 832 833 834 835 836 837 838 839 840 841 842 843 | JIM_EXPORT int Jim_EvalExpression (Jim_Interp *interp, Jim_Obj *exprObjPtr, Jim_Obj **exprResultPtrPtr); JIM_EXPORT int Jim_GetBoolFromExpr (Jim_Interp *interp, Jim_Obj *exprObjPtr, int *boolPtr); JIM_EXPORT int Jim_GetWide (Jim_Interp *interp, Jim_Obj *objPtr, jim_wide *widePtr); JIM_EXPORT int Jim_GetLong (Jim_Interp *interp, Jim_Obj *objPtr, long *longPtr); #define Jim_NewWideObj Jim_NewIntObj JIM_EXPORT Jim_Obj * Jim_NewIntObj (Jim_Interp *interp, | > > > > | 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 | JIM_EXPORT int Jim_EvalExpression (Jim_Interp *interp, Jim_Obj *exprObjPtr, Jim_Obj **exprResultPtrPtr); JIM_EXPORT int Jim_GetBoolFromExpr (Jim_Interp *interp, Jim_Obj *exprObjPtr, int *boolPtr); JIM_EXPORT int Jim_GetBoolean(Jim_Interp *interp, Jim_Obj *objPtr, int *booleanPtr); JIM_EXPORT int Jim_GetWide (Jim_Interp *interp, Jim_Obj *objPtr, jim_wide *widePtr); JIM_EXPORT int Jim_GetLong (Jim_Interp *interp, Jim_Obj *objPtr, long *longPtr); #define Jim_NewWideObj Jim_NewIntObj JIM_EXPORT Jim_Obj * Jim_NewIntObj (Jim_Interp *interp, |
︙ | ︙ | |||
851 852 853 854 855 856 857 | JIM_EXPORT Jim_Obj * Jim_NewDoubleObj(Jim_Interp *interp, double doubleValue); JIM_EXPORT void Jim_WrongNumArgs (Jim_Interp *interp, int argc, Jim_Obj *const *argv, const char *msg); JIM_EXPORT int Jim_GetEnum (Jim_Interp *interp, Jim_Obj *objPtr, const char * const *tablePtr, int *indexPtr, const char *name, int flags); | | | > | 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 | JIM_EXPORT Jim_Obj * Jim_NewDoubleObj(Jim_Interp *interp, double doubleValue); JIM_EXPORT void Jim_WrongNumArgs (Jim_Interp *interp, int argc, Jim_Obj *const *argv, const char *msg); JIM_EXPORT int Jim_GetEnum (Jim_Interp *interp, Jim_Obj *objPtr, const char * const *tablePtr, int *indexPtr, const char *name, int flags); JIM_EXPORT int Jim_ScriptIsComplete(Jim_Interp *interp, Jim_Obj *scriptObj, char *stateCharPtr); JIM_EXPORT int Jim_FindByName(const char *name, const char * const array[], size_t len); typedef void (Jim_InterpDeleteProc)(Jim_Interp *interp, void *data); JIM_EXPORT void * Jim_GetAssocData(Jim_Interp *interp, const char *key); JIM_EXPORT int Jim_SetAssocData(Jim_Interp *interp, const char *key, Jim_InterpDeleteProc *delProc, void *data); |
︙ | ︙ | |||
902 903 904 905 906 907 908 | JIM_EXPORT int Jim_IsDict(Jim_Obj *objPtr); JIM_EXPORT int Jim_IsList(Jim_Obj *objPtr); #ifdef __cplusplus } #endif | | | | | | | | | | | 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 | JIM_EXPORT int Jim_IsDict(Jim_Obj *objPtr); JIM_EXPORT int Jim_IsList(Jim_Obj *objPtr); #ifdef __cplusplus } #endif #endif #ifndef JIM_SUBCMD_H #define JIM_SUBCMD_H #ifdef __cplusplus extern "C" { #endif #define JIM_MODFLAG_HIDDEN 0x0001 #define JIM_MODFLAG_FULLARGV 0x0002 typedef int jim_subcmd_function(Jim_Interp *interp, int argc, Jim_Obj *const *argv); typedef struct { const char *cmd; const char *args; jim_subcmd_function *function; short minargs; short maxargs; unsigned short flags; } jim_subcmd_type; const jim_subcmd_type * Jim_ParseSubCmd(Jim_Interp *interp, const jim_subcmd_type *command_table, int argc, Jim_Obj *const *argv); int Jim_SubCmdProc(Jim_Interp *interp, int argc, Jim_Obj *const *argv); |
︙ | ︙ | |||
958 959 960 961 962 963 964 | typedef struct { int rm_so; int rm_eo; } regmatch_t; typedef struct regexp { | | | | | | | | | | | | | | | | | | | | | | | | | | | 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 | typedef struct { int rm_so; int rm_eo; } regmatch_t; typedef struct regexp { int re_nsub; int cflags; int err; int regstart; int reganch; int regmust; int regmlen; int *program; const char *regparse; int p; int proglen; int eflags; const char *start; const char *reginput; const char *regbol; regmatch_t *pmatch; int nmatch; } regexp; typedef regexp regex_t; #define REG_EXTENDED 0 #define REG_NEWLINE 1 #define REG_ICASE 2 #define REG_NOTBOL 16 enum { REG_NOERROR, REG_NOMATCH, REG_BADPAT, REG_ERR_NULL_ARGUMENT, REG_ERR_UNKNOWN, REG_ERR_TOO_BIG, REG_ERR_NOMEM, REG_ERR_TOO_MANY_PAREN, REG_ERR_UNMATCHED_PAREN, REG_ERR_UNMATCHED_BRACES, |
︙ | ︙ | |||
1035 1036 1037 1038 1039 1040 1041 | { if (Jim_PackageProvide(interp, "bootstrap", "1.0", JIM_ERRMSG)) return JIM_ERR; return Jim_EvalSource(interp, "bootstrap.tcl", 1, "\n" "\n" | | > > > > > > > > > | 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 | { if (Jim_PackageProvide(interp, "bootstrap", "1.0", JIM_ERRMSG)) return JIM_ERR; return Jim_EvalSource(interp, "bootstrap.tcl", 1, "\n" "\n" "proc package {cmd pkg} {\n" " if {$cmd eq \"require\"} {\n" " foreach path $::auto_path {\n" " if {[file exists $path/$pkg.tcl]} {\n" " uplevel #0 [list source $path/$pkg.tcl]\n" " return\n" " }\n" " }\n" " }\n" "}\n" ); } int Jim_initjimshInit(Jim_Interp *interp) { if (Jim_PackageProvide(interp, "initjimsh", "1.0", JIM_ERRMSG)) return JIM_ERR; |
︙ | ︙ | |||
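A usage note (not part of this check-in): the bootstrap "package" shim above makes package require work before the real package machinery exists by sourcing the package file from ::auto_path; the package name and path below are illustrative.

    # With ::auto_path containing e.g. /usr/local/lib/jim, this call:
    package require mylib
    # behaves roughly like:
    #   source /usr/local/lib/jim/mylib.tcl    ;# first matching path wins
    # If no $path/mylib.tcl exists, or for any subcommand other than
    # "require", the shim simply returns without raising an error.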
1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 | " }\n" " file delete $path\n" "}\n" ); } #include <stdio.h> #include <string.h> #include <errno.h> #include <fcntl.h> #ifdef HAVE_UNISTD_H #include <unistd.h> #include <sys/stat.h> | > | 1786 1787 1788 1789 1790 1791 1792 1793 1794 1795 1796 1797 1798 1799 1800 | " }\n" " file delete $path\n" "}\n" ); } #define _GNU_SOURCE #include <stdio.h> #include <string.h> #include <errno.h> #include <fcntl.h> #ifdef HAVE_UNISTD_H #include <unistd.h> #include <sys/stat.h> |
︙ | ︙ | |||
1791 1792 1793 1794 1795 1796 1797 1798 | #ifdef HAVE_SYS_UN_H #include <sys/un.h> #endif #else #define JIM_ANSIC #endif | > > > > > | | > > > > > > > > > > > > > | > > < < | | | < < < < | < < | < | < < | < | < < | < < | < | | < < | < | < > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | 1809 1810 1811 1812 1813 1814 1815 1816 1817 1818 1819 1820 1821 1822 1823 1824 1825 1826 1827 1828 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 1843 1844 1845 1846 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857 1858 1859 1860 1861 1862 1863 1864 1865 1866 1867 1868 1869 1870 1871 1872 1873 1874 1875 1876 1877 1878 1879 1880 1881 1882 1883 1884 1885 1886 1887 1888 1889 1890 1891 1892 1893 1894 1895 1896 1897 1898 1899 1900 1901 1902 1903 1904 1905 1906 1907 1908 1909 1910 1911 1912 1913 1914 1915 1916 1917 1918 1919 1920 1921 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 | #ifdef HAVE_SYS_UN_H #include <sys/un.h> #endif #else #define JIM_ANSIC #endif #if defined(JIM_SSL) #include <openssl/ssl.h> #include <openssl/err.h> #endif #define AIO_CMD_LEN 32 #define AIO_BUF_LEN 256 #ifndef HAVE_FTELLO #define ftello ftell #endif #ifndef HAVE_FSEEKO #define fseeko fseek #endif #define AIO_KEEPOPEN 1 #if defined(JIM_IPV6) #define IPV6 1 #else #define IPV6 0 #ifndef PF_INET6 #define PF_INET6 0 #endif #endif #define JimCheckStreamError(interp, af) af->fops->error(af) struct AioFile; typedef struct { int (*writer)(struct AioFile *af, const char *buf, int len); int (*reader)(struct AioFile *af, char *buf, int len); const char *(*getline)(struct AioFile *af, char *buf, int len); int (*error)(const struct AioFile *af); const char *(*strerror)(struct AioFile *af); int (*verify)(struct AioFile *af); } JimAioFopsType; typedef struct AioFile { FILE *fp; Jim_Obj *filename; int type; int openFlags; int fd; Jim_Obj *rEvent; Jim_Obj *wEvent; Jim_Obj *eEvent; int addr_family; void *ssl; const JimAioFopsType *fops; } AioFile; static int stdio_writer(struct AioFile *af, const char *buf, int len) { return fwrite(buf, 1, len, af->fp); } static int stdio_reader(struct AioFile *af, char *buf, int len) { return fread(buf, 1, len, af->fp); } static const char *stdio_getline(struct AioFile *af, char *buf, int len) { return fgets(buf, len, af->fp); } static int stdio_error(const AioFile *af) { if (!ferror(af->fp)) { return JIM_OK; } clearerr(af->fp); if (feof(af->fp) || errno == EAGAIN || errno == EINTR) { return JIM_OK; } #ifdef ECONNRESET if (errno == ECONNRESET) { return JIM_OK; } #endif #ifdef ECONNABORTED if (errno != ECONNABORTED) { return JIM_OK; } #endif return JIM_ERR; } static const char *stdio_strerror(struct AioFile *af) { return strerror(errno); } static const JimAioFopsType stdio_fops = { stdio_writer, stdio_reader, stdio_getline, stdio_error, stdio_strerror, NULL }; static int JimAioSubCmdProc(Jim_Interp *interp, int argc, Jim_Obj *const *argv); static AioFile *JimMakeChannel(Jim_Interp *interp, FILE *fh, int fd, Jim_Obj *filename, const char *hdlfmt, int family, const char *mode); static const char *JimAioErrorString(AioFile *af) { if (af && af->fops) return af->fops->strerror(af); return strerror(errno); } static void 
JimAioSetError(Jim_Interp *interp, Jim_Obj *name) { AioFile *af = Jim_CmdPrivData(interp); if (name) { Jim_SetResultFormatted(interp, "%#s: %s", name, JimAioErrorString(af)); } else { Jim_SetResultString(interp, JimAioErrorString(af), -1); } } static void JimAioDelProc(Jim_Interp *interp, void *privData) { AioFile *af = privData; JIM_NOTUSED(interp); Jim_DecrRefCount(interp, af->filename); #ifdef jim_ext_eventloop Jim_DeleteFileHandler(interp, af->fd, JIM_EVENT_READABLE | JIM_EVENT_WRITABLE | JIM_EVENT_EXCEPTION); #endif #if defined(JIM_SSL) if (af->ssl != NULL) { SSL_free(af->ssl); } #endif if (!(af->openFlags & AIO_KEEPOPEN)) { fclose(af->fp); } Jim_Free(af); } static int aio_cmd_read(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { AioFile *af = Jim_CmdPrivData(interp); char buf[AIO_BUF_LEN]; Jim_Obj *objPtr; int nonewline = 0; jim_wide neededLen = -1; if (argc && Jim_CompareStringImmediate(interp, argv[0], "-nonewline")) { nonewline = 1; argv++; argc--; } if (argc == 1) { |
︙ | ︙ | |||
1921 1922 1923 1924 1925 1926 1927 | if (neededLen == -1) { readlen = AIO_BUF_LEN; } else { readlen = (neededLen > AIO_BUF_LEN ? AIO_BUF_LEN : neededLen); } | | | > > > > > > > > > > > > > > > > > > > > > > > > | | > | > > | < > < < < < < | < < < | > | > | | | 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 2031 2032 2033 2034 2035 2036 2037 2038 2039 2040 2041 2042 2043 2044 2045 2046 2047 2048 2049 2050 2051 2052 2053 2054 2055 2056 2057 2058 2059 2060 2061 2062 2063 2064 2065 2066 2067 2068 2069 2070 2071 2072 2073 2074 2075 2076 2077 2078 2079 2080 2081 2082 2083 2084 2085 2086 2087 2088 2089 2090 2091 2092 2093 2094 2095 2096 2097 2098 2099 2100 2101 2102 2103 2104 2105 2106 2107 2108 2109 2110 2111 2112 2113 2114 2115 2116 2117 2118 2119 2120 2121 2122 2123 2124 2125 2126 2127 2128 2129 2130 2131 2132 2133 2134 2135 2136 2137 2138 2139 2140 2141 2142 2143 2144 2145 2146 2147 2148 2149 2150 2151 2152 2153 2154 2155 | if (neededLen == -1) { readlen = AIO_BUF_LEN; } else { readlen = (neededLen > AIO_BUF_LEN ? AIO_BUF_LEN : neededLen); } retval = af->fops->reader(af, buf, readlen); if (retval > 0) { Jim_AppendString(interp, objPtr, buf, retval); if (neededLen != -1) { neededLen -= retval; } } if (retval != readlen) break; } if (JimCheckStreamError(interp, af)) { Jim_FreeNewObj(interp, objPtr); return JIM_ERR; } if (nonewline) { int len; const char *s = Jim_GetString(objPtr, &len); if (len > 0 && s[len - 1] == '\n') { objPtr->length--; objPtr->bytes[objPtr->length] = '\0'; } } Jim_SetResult(interp, objPtr); return JIM_OK; } AioFile *Jim_AioFile(Jim_Interp *interp, Jim_Obj *command) { Jim_Cmd *cmdPtr = Jim_GetCommand(interp, command, JIM_ERRMSG); if (cmdPtr && !cmdPtr->isproc && cmdPtr->u.native.cmdProc == JimAioSubCmdProc) { return (AioFile *) cmdPtr->u.native.privData; } Jim_SetResultFormatted(interp, "Not a filehandle: \"%#s\"", command); return NULL; } FILE *Jim_AioFilehandle(Jim_Interp *interp, Jim_Obj *command) { AioFile *af; af = Jim_AioFile(interp, command); if (af == NULL) { return NULL; } return af->fp; } static int aio_cmd_copy(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { AioFile *af = Jim_CmdPrivData(interp); jim_wide count = 0; jim_wide maxlen = JIM_WIDE_MAX; AioFile *outf = Jim_AioFile(interp, argv[0]); if (outf == NULL) { return JIM_ERR; } if (argc == 2) { if (Jim_GetWide(interp, argv[1], &maxlen) != JIM_OK) { return JIM_ERR; } } while (count < maxlen) { char ch; if (af->fops->reader(af, &ch, 1) != 1) { break; } if (outf->fops->writer(outf, &ch, 1) != 1) { break; } count++; } if (JimCheckStreamError(interp, af) || JimCheckStreamError(interp, outf)) { return JIM_ERR; } Jim_SetResultInt(interp, count); return JIM_OK; } static int aio_cmd_gets(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { AioFile *af = Jim_CmdPrivData(interp); char buf[AIO_BUF_LEN]; Jim_Obj *objPtr; int len; errno = 0; objPtr = Jim_NewStringObj(interp, NULL, 0); while (1) { buf[AIO_BUF_LEN - 1] = '_'; if (af->fops->getline(af, buf, AIO_BUF_LEN) == NULL) break; if (buf[AIO_BUF_LEN - 1] == '\0' && buf[AIO_BUF_LEN - 2] != '\n') { Jim_AppendString(interp, objPtr, buf, AIO_BUF_LEN - 1); } else { len = strlen(buf); if (len && (buf[len - 1] == '\n')) { len--; } Jim_AppendString(interp, objPtr, buf, len); break; } } if (JimCheckStreamError(interp, af)) { Jim_FreeNewObj(interp, objPtr); return JIM_ERR; } if (argc) { if (Jim_SetVariable(interp, argv[0], objPtr) != JIM_OK) { Jim_FreeNewObj(interp, objPtr); return 
JIM_ERR; } len = Jim_Length(objPtr); if (len == 0 && feof(af->fp)) { len = -1; } Jim_SetResultInt(interp, len); } else { Jim_SetResult(interp, objPtr); } |
︙ | ︙ | |||
2066 2067 2068 2069 2070 2071 2072 | strObj = argv[1]; } else { strObj = argv[0]; } wdata = Jim_GetString(strObj, &wlen); | | | | 2170 2171 2172 2173 2174 2175 2176 2177 2178 2179 2180 2181 2182 2183 2184 2185 | strObj = argv[1]; } else { strObj = argv[0]; } wdata = Jim_GetString(strObj, &wlen); if (af->fops->writer(af, wdata, wlen) == wlen) { if (argc == 2 || af->fops->writer(af, "\n", 1) == 1) { return JIM_OK; } } JimAioSetError(interp, af->filename); return JIM_ERR; } |
︙ | ︙ | |||
2200 2201 2202 2203 2204 2205 2206 2207 2208 2209 2210 2211 2212 2213 | } (void)fcntl(af->fd, F_SETFL, fmode); } Jim_SetResultInt(interp, (fmode & O_NONBLOCK) ? 1 : 0); return JIM_OK; } #endif static int aio_cmd_buffering(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { AioFile *af = Jim_CmdPrivData(interp); static const char * const options[] = { "none", | > > > > > > > > > > > | 2304 2305 2306 2307 2308 2309 2310 2311 2312 2313 2314 2315 2316 2317 2318 2319 2320 2321 2322 2323 2324 2325 2326 2327 2328 | } (void)fcntl(af->fd, F_SETFL, fmode); } Jim_SetResultInt(interp, (fmode & O_NONBLOCK) ? 1 : 0); return JIM_OK; } #endif #ifdef HAVE_FSYNC static int aio_cmd_sync(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { AioFile *af = Jim_CmdPrivData(interp); fflush(af->fp); fsync(af->fd); return JIM_OK; } #endif static int aio_cmd_buffering(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { AioFile *af = Jim_CmdPrivData(interp); static const char * const options[] = { "none", |
︙ | ︙ | |||
2256 2257 2258 2259 2260 2261 2262 | return Jim_EvalObjBackground(interp, *objPtrPtr); } static int aio_eventinfo(Jim_Interp *interp, AioFile * af, unsigned mask, Jim_Obj **scriptHandlerObj, int argc, Jim_Obj * const *argv) { if (argc == 0) { | | | | | | | | | 2371 2372 2373 2374 2375 2376 2377 2378 2379 2380 2381 2382 2383 2384 2385 2386 2387 2388 2389 2390 2391 2392 2393 2394 2395 2396 2397 2398 2399 2400 2401 2402 2403 2404 2405 2406 2407 | return Jim_EvalObjBackground(interp, *objPtrPtr); } static int aio_eventinfo(Jim_Interp *interp, AioFile * af, unsigned mask, Jim_Obj **scriptHandlerObj, int argc, Jim_Obj * const *argv) { if (argc == 0) { if (*scriptHandlerObj) { Jim_SetResult(interp, *scriptHandlerObj); } return JIM_OK; } if (*scriptHandlerObj) { Jim_DeleteFileHandler(interp, af->fd, mask); } if (Jim_Length(argv[0]) == 0) { return JIM_OK; } Jim_IncrRefCount(argv[0]); *scriptHandlerObj = argv[0]; Jim_CreateFileHandler(interp, af->fd, mask, JimAioFileEventHandler, scriptHandlerObj, JimAioFileEventFinalizer); return JIM_OK; } static int aio_cmd_readable(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { |
︙ | ︙ | |||
2305 2306 2307 2308 2309 2310 2311 2312 2313 2314 2315 2316 2317 2318 | static int aio_cmd_onexception(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { AioFile *af = Jim_CmdPrivData(interp); return aio_eventinfo(interp, af, JIM_EVENT_EXCEPTION, &af->eEvent, argc, argv); } #endif static const jim_subcmd_type aio_command_table[] = { { "read", "?-nonewline? ?len?", aio_cmd_read, 0, 2, | > > | | | | | | | | | | | | > > > > > > > > > | | | | | 2420 2421 2422 2423 2424 2425 2426 2427 2428 2429 2430 2431 2432 2433 2434 2435 2436 2437 2438 2439 2440 2441 2442 2443 2444 2445 2446 2447 2448 2449 2450 2451 2452 2453 2454 2455 2456 2457 2458 2459 2460 2461 2462 2463 2464 2465 2466 2467 2468 2469 2470 2471 2472 2473 2474 2475 2476 2477 2478 2479 2480 2481 2482 2483 2484 2485 2486 2487 2488 2489 2490 2491 2492 2493 2494 2495 2496 2497 2498 2499 2500 2501 2502 2503 2504 2505 2506 2507 2508 2509 2510 2511 2512 2513 2514 2515 2516 2517 2518 2519 2520 2521 2522 2523 2524 2525 2526 2527 2528 2529 2530 2531 2532 2533 2534 2535 2536 2537 2538 2539 2540 2541 2542 2543 2544 2545 2546 2547 2548 2549 2550 2551 2552 2553 2554 2555 2556 2557 2558 2559 2560 2561 | static int aio_cmd_onexception(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { AioFile *af = Jim_CmdPrivData(interp); return aio_eventinfo(interp, af, JIM_EVENT_EXCEPTION, &af->eEvent, argc, argv); } #endif static const jim_subcmd_type aio_command_table[] = { { "read", "?-nonewline? ?len?", aio_cmd_read, 0, 2, }, { "copyto", "handle ?size?", aio_cmd_copy, 1, 2, }, { "gets", "?var?", aio_cmd_gets, 0, 1, }, { "puts", "?-nonewline? str", aio_cmd_puts, 1, 2, }, { "isatty", NULL, aio_cmd_isatty, 0, 0, }, { "flush", NULL, aio_cmd_flush, 0, 0, }, { "eof", NULL, aio_cmd_eof, 0, 0, }, { "close", "?r(ead)|w(rite)?", aio_cmd_close, 0, 1, JIM_MODFLAG_FULLARGV, }, { "seek", "offset ?start|current|end", aio_cmd_seek, 1, 2, }, { "tell", NULL, aio_cmd_tell, 0, 0, }, { "filename", NULL, aio_cmd_filename, 0, 0, }, #ifdef O_NDELAY { "ndelay", "?0|1?", aio_cmd_ndelay, 0, 1, }, #endif #ifdef HAVE_FSYNC { "sync", NULL, aio_cmd_sync, 0, 0, }, #endif { "buffering", "none|line|full", aio_cmd_buffering, 1, 1, }, #ifdef jim_ext_eventloop { "readable", "?readable-script?", aio_cmd_readable, 0, 1, }, { "writable", "?writable-script?", aio_cmd_writable, 0, 1, }, { "onexception", "?exception-script?", aio_cmd_onexception, 0, 1, }, #endif { NULL } }; static int JimAioSubCmdProc(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { |
︙ | ︙ | |||
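As a usage sketch (not part of this check-in), the subcommands declared in that dispatch table are invoked on the channel command itself; the file names are illustrative, and "sync" is only available when built with HAVE_FSYNC.

    set in  [open input.txt r]
    set out [open output.txt w]

    $in copyto $out 4096         ;# aio_cmd_copy: copy at most 4096 bytes, returns count
    $in gets line                ;# aio_cmd_gets: next line into $line, returns its length
    $out puts -nonewline $line   ;# aio_cmd_puts
    $out sync                    ;# aio_cmd_sync: fflush() then fsync()
    $in close
    $out close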
2448 2449 2450 2451 2452 2453 2454 | mode = (argc == 3) ? Jim_String(argv[2]) : "r"; #ifdef jim_ext_tclcompat { const char *filename = Jim_String(argv[1]); | | | > | > > < > > > > > | | > > | | | | | | 2574 2575 2576 2577 2578 2579 2580 2581 2582 2583 2584 2585 2586 2587 2588 2589 2590 2591 2592 2593 2594 2595 2596 2597 2598 2599 2600 2601 2602 2603 2604 2605 2606 2607 2608 2609 2610 2611 2612 2613 2614 2615 2616 2617 2618 2619 2620 2621 2622 2623 2624 2625 2626 2627 2628 2629 2630 2631 2632 2633 2634 2635 2636 2637 2638 2639 2640 2641 2642 2643 2644 2645 2646 2647 2648 2649 2650 2651 2652 2653 2654 2655 2656 2657 2658 2659 2660 2661 2662 2663 2664 2665 2666 2667 2668 2669 2670 2671 2672 2673 2674 2675 2676 2677 2678 2679 2680 2681 2682 2683 | mode = (argc == 3) ? Jim_String(argv[2]) : "r"; #ifdef jim_ext_tclcompat { const char *filename = Jim_String(argv[1]); if (*filename == '|') { Jim_Obj *evalObj[3]; evalObj[0] = Jim_NewStringObj(interp, "::popen", -1); evalObj[1] = Jim_NewStringObj(interp, filename + 1, -1); evalObj[2] = Jim_NewStringObj(interp, mode, -1); return Jim_EvalObjVector(interp, 3, evalObj); } } #endif return JimMakeChannel(interp, NULL, -1, argv[1], "aio.handle%ld", 0, mode) ? JIM_OK : JIM_ERR; } static AioFile *JimMakeChannel(Jim_Interp *interp, FILE *fh, int fd, Jim_Obj *filename, const char *hdlfmt, int family, const char *mode) { AioFile *af; char buf[AIO_CMD_LEN]; int openFlags = 0; snprintf(buf, sizeof(buf), hdlfmt, Jim_GetId(interp)); if (fh) { openFlags = AIO_KEEPOPEN; } snprintf(buf, sizeof(buf), hdlfmt, Jim_GetId(interp)); if (!filename) { filename = Jim_NewStringObj(interp, buf, -1); } Jim_IncrRefCount(filename); if (fh == NULL) { #if !defined(JIM_ANSIC) if (fd >= 0) { fh = fdopen(fd, mode); } else #endif fh = fopen(Jim_String(filename), mode); if (fh == NULL) { JimAioSetError(interp, filename); #if !defined(JIM_ANSIC) if (fd >= 0) { close(fd); } #endif Jim_DecrRefCount(interp, filename); return NULL; } } af = Jim_Alloc(sizeof(*af)); memset(af, 0, sizeof(*af)); af->fp = fh; af->fd = fileno(fh); af->filename = filename; #ifdef FD_CLOEXEC if ((openFlags & AIO_KEEPOPEN) == 0) { (void)fcntl(af->fd, F_SETFD, FD_CLOEXEC); } #endif af->openFlags = openFlags; af->addr_family = family; af->fops = &stdio_fops; af->ssl = NULL; Jim_CreateCommand(interp, buf, JimAioSubCmdProc, af, JimAioDelProc); Jim_SetResult(interp, Jim_MakeGlobalNamespaceName(interp, Jim_NewStringObj(interp, buf, -1))); return af; } #if defined(HAVE_PIPE) || (defined(HAVE_SOCKETPAIR) && defined(HAVE_SYS_UN_H)) static int JimMakeChannelPair(Jim_Interp *interp, int p[2], Jim_Obj *filename, const char *hdlfmt, int family, const char *mode[2]) { if (JimMakeChannel(interp, NULL, p[0], filename, hdlfmt, family, mode[0])) { Jim_Obj *objPtr = Jim_NewListObj(interp, NULL, 0); Jim_ListAppendElement(interp, objPtr, Jim_GetResult(interp)); if (JimMakeChannel(interp, NULL, p[1], filename, hdlfmt, family, mode[1])) { Jim_ListAppendElement(interp, objPtr, Jim_GetResult(interp)); Jim_SetResult(interp, objPtr); return JIM_OK; } } close(p[0]); close(p[1]); JimAioSetError(interp, NULL); return JIM_ERR; } #endif |
︙ | ︙ | |||
2565 2566 2567 2568 2569 2570 2571 2572 2573 | } Jim_AppendString(interp, filenameObj, "tcl.tmp.XXXXXX", -1); } else { filenameObj = Jim_NewStringObj(interp, template, -1); } mask = umask(S_IXUSR | S_IRWXG | S_IRWXO); | > > > > | > < < < < < < < < < < < > > > > | | 2700 2701 2702 2703 2704 2705 2706 2707 2708 2709 2710 2711 2712 2713 2714 2715 2716 2717 2718 2719 2720 2721 2722 2723 2724 2725 2726 2727 2728 2729 2730 2731 2732 2733 2734 2735 2736 2737 2738 2739 2740 2741 2742 2743 2744 2745 2746 2747 2748 2749 2750 2751 2752 2753 | } Jim_AppendString(interp, filenameObj, "tcl.tmp.XXXXXX", -1); } else { filenameObj = Jim_NewStringObj(interp, template, -1); } #if defined(S_IRWXG) && defined(S_IRWXO) mask = umask(S_IXUSR | S_IRWXG | S_IRWXO); #else mask = umask(S_IXUSR); #endif fd = mkstemp(filenameObj->bytes); umask(mask); if (fd < 0) { JimAioSetError(interp, filenameObj); Jim_FreeNewObj(interp, filenameObj); return -1; } Jim_SetResult(interp, filenameObj); return fd; #else Jim_SetResultString(interp, "platform has no tempfile support", -1); return -1; #endif } int Jim_aioInit(Jim_Interp *interp) { if (Jim_PackageProvide(interp, "aio", "1.0", JIM_ERRMSG)) return JIM_ERR; #if defined(JIM_SSL) Jim_CreateCommand(interp, "load_ssl_certs", JimAioLoadSSLCertsCommand, NULL, NULL); #endif Jim_CreateCommand(interp, "open", JimAioOpenCommand, NULL, NULL); #ifndef JIM_ANSIC Jim_CreateCommand(interp, "socket", JimAioSockCommand, NULL, NULL); #endif JimMakeChannel(interp, stdin, -1, NULL, "stdin", 0, "r"); JimMakeChannel(interp, stdout, -1, NULL, "stdout", 0, "w"); JimMakeChannel(interp, stderr, -1, NULL, "stderr", 0, "w"); return JIM_OK; } |
︙ | ︙ | |||
2706 2707 2708 2709 2710 2711 2712 | static regex_t *SetRegexpFromAny(Jim_Interp *interp, Jim_Obj *objPtr, unsigned flags) { regex_t *compre; const char *pattern; int ret; | | | | | | 2839 2840 2841 2842 2843 2844 2845 2846 2847 2848 2849 2850 2851 2852 2853 2854 2855 2856 2857 2858 2859 2860 2861 2862 | static regex_t *SetRegexpFromAny(Jim_Interp *interp, Jim_Obj *objPtr, unsigned flags) { regex_t *compre; const char *pattern; int ret; if (objPtr->typePtr == ®expObjType && objPtr->internalRep.regexpValue.compre && objPtr->internalRep.regexpValue.flags == flags) { return objPtr->internalRep.regexpValue.compre; } pattern = Jim_String(objPtr); compre = Jim_Alloc(sizeof(regex_t)); if ((ret = regcomp(compre, pattern, REG_EXTENDED | flags)) != 0) { char buf[100]; regerror(ret, compre, buf, sizeof(buf)); |
︙ | ︙ | |||
2767 2768 2769 2770 2771 2772 2773 | static const char * const options[] = { "-indices", "-nocase", "-line", "-all", "-inline", "-start", "--", NULL }; if (argc < 3) { wrongNumArgs: Jim_WrongNumArgs(interp, 1, argv, | | | 2900 2901 2902 2903 2904 2905 2906 2907 2908 2909 2910 2911 2912 2913 2914 | static const char * const options[] = { "-indices", "-nocase", "-line", "-all", "-inline", "-start", "--", NULL }; if (argc < 3) { wrongNumArgs: Jim_WrongNumArgs(interp, 1, argv, "?-switch ...? exp string ?matchVar? ?subMatchVar ...?"); return JIM_ERR; } for (i = 1; i < argc; i++) { const char *opt = Jim_String(argv[i]); if (*opt != '-') { |
︙ | ︙ | |||
2876 2877 2878 2879 2880 2881 2882 | if (match == REG_NOMATCH) { goto done; } num_matches++; if (opt_all && !opt_inline) { | | | 3009 3010 3011 3012 3013 3014 3015 3016 3017 3018 3019 3020 3021 3022 3023 | if (match == REG_NOMATCH) { goto done; } num_matches++; if (opt_all && !opt_inline) { goto try_next_match; } j = 0; for (i += 2; opt_inline ? j < num_vars : i < argc; i++, j++) { Jim_Obj *resultObj; |
︙ | ︙ | |||
2916 2917 2918 2919 2920 2921 2922 | } } if (opt_inline) { Jim_ListAppendElement(interp, resultListObj, resultObj); } else { | | | 3049 3050 3051 3052 3053 3054 3055 3056 3057 3058 3059 3060 3061 3062 3063 | } } if (opt_inline) { Jim_ListAppendElement(interp, resultListObj, resultObj); } else { result = Jim_SetVariable(interp, argv[i], resultObj); if (result != JIM_OK) { Jim_FreeObj(interp, resultObj); break; } } |
︙ | ︙ | |||
2989 2990 2991 2992 2993 2994 2995 | static const char * const options[] = { "-nocase", "-line", "-all", "-start", "--", NULL }; if (argc < 4) { wrongNumArgs: Jim_WrongNumArgs(interp, 1, argv, | | | 3122 3123 3124 3125 3126 3127 3128 3129 3130 3131 3132 3133 3134 3135 3136 | static const char * const options[] = { "-nocase", "-line", "-all", "-start", "--", NULL }; if (argc < 4) { wrongNumArgs: Jim_WrongNumArgs(interp, 1, argv, "?-switch ...? exp string subSpec ?varName?"); return JIM_ERR; } for (i = 1; i < argc; i++) { const char *opt = Jim_String(argv[i]); if (*opt != '-') { |
︙ | ︙ | |||
3043 3044 3045 3046 3047 3048 3049 | } pattern = Jim_String(argv[i]); source_str = Jim_GetString(argv[i + 1], &source_len); replace_str = Jim_GetString(argv[i + 2], &replace_len); varname = argv[i + 3]; | | | | 3176 3177 3178 3179 3180 3181 3182 3183 3184 3185 3186 3187 3188 3189 3190 3191 3192 3193 3194 3195 3196 3197 3198 3199 3200 3201 3202 3203 3204 3205 | } pattern = Jim_String(argv[i]); source_str = Jim_GetString(argv[i + 1], &source_len); replace_str = Jim_GetString(argv[i + 2], &replace_len); varname = argv[i + 3]; resultObj = Jim_NewStringObj(interp, "", 0); if (offset) { if (offset < 0) { offset += source_len + 1; } if (offset > source_len) { offset = source_len; } else if (offset < 0) { offset = 0; } } Jim_AppendString(interp, resultObj, source_str, offset); n = source_len - offset; p = source_str + offset; do { int match = regexec(regex, p, MAX_SUB_MATCHES, pmatch, regexec_flags); |
︙ | ︙ | |||
3100 3101 3102 3103 3104 3105 3106 | idx = c - '0'; } else if ((c == '\\') || (c == '&')) { Jim_AppendString(interp, resultObj, replace_str + j, 1); continue; } else { | | | | | | | | 3233 3234 3235 3236 3237 3238 3239 3240 3241 3242 3243 3244 3245 3246 3247 3248 3249 3250 3251 3252 3253 3254 3255 3256 3257 3258 3259 3260 3261 3262 3263 3264 3265 3266 3267 3268 3269 3270 3271 3272 3273 3274 3275 3276 3277 3278 3279 3280 3281 3282 3283 3284 3285 3286 3287 | idx = c - '0'; } else if ((c == '\\') || (c == '&')) { Jim_AppendString(interp, resultObj, replace_str + j, 1); continue; } else { Jim_AppendString(interp, resultObj, replace_str + j - 1, (j == replace_len) ? 1 : 2); continue; } } else { Jim_AppendString(interp, resultObj, replace_str + j, 1); continue; } if ((idx < MAX_SUB_MATCHES) && pmatch[idx].rm_so != -1 && pmatch[idx].rm_eo != -1) { Jim_AppendString(interp, resultObj, p + pmatch[idx].rm_so, pmatch[idx].rm_eo - pmatch[idx].rm_so); } } p += pmatch[0].rm_eo; n -= pmatch[0].rm_eo; if (!opt_all || n == 0) { break; } if ((regcomp_flags & REG_NEWLINE) == 0 && pattern[0] == '^') { break; } if (pattern[0] == '\0' && n) { Jim_AppendString(interp, resultObj, p, 1); p++; n--; } regexec_flags |= REG_NOTBOL; } while (n); Jim_AppendString(interp, resultObj, p, -1); if (argc - i == 4) { result = Jim_SetVariable(interp, varname, resultObj); if (result == JIM_OK) { Jim_SetResultInt(interp, num_matches); } else { |
︙ | ︙ | |||
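For reference (not part of this check-in), the option parsing and replacement handling in these hunks correspond to Tcl-compatible calls like the following; the input strings are made up.

    # -all -inline returns every match as a flat list
    regexp -all -inline {[0-9]+} "a1 b22 c333"
    # => 1 22 333

    # -all replaces every match; & in the substitution is the whole match.
    # Returns the number of matches and stores the result in $out.
    regsub -all {[0-9]+} "a1 b22 c333" {<&>} out
    # => 3, with out = "a<1> b<22> c<333>"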
3246 3247 3248 3249 3250 3251 3252 | { Jim_ListAppendElement(interp, listObj, Jim_NewStringObj(interp, key, -1)); Jim_ListAppendElement(interp, listObj, Jim_NewIntObj(interp, value)); } static int StoreStatData(Jim_Interp *interp, Jim_Obj *varName, const struct stat *sb) { | | | | | | | | 3379 3380 3381 3382 3383 3384 3385 3386 3387 3388 3389 3390 3391 3392 3393 3394 3395 3396 3397 3398 3399 3400 3401 3402 3403 3404 3405 3406 3407 3408 3409 3410 3411 3412 3413 3414 3415 3416 3417 3418 3419 3420 3421 3422 3423 3424 3425 3426 3427 3428 3429 3430 3431 3432 3433 3434 3435 3436 3437 3438 3439 3440 3441 3442 3443 3444 3445 3446 3447 3448 3449 3450 3451 3452 3453 3454 | { Jim_ListAppendElement(interp, listObj, Jim_NewStringObj(interp, key, -1)); Jim_ListAppendElement(interp, listObj, Jim_NewIntObj(interp, value)); } static int StoreStatData(Jim_Interp *interp, Jim_Obj *varName, const struct stat *sb) { Jim_Obj *listObj = Jim_NewListObj(interp, NULL, 0); AppendStatElement(interp, listObj, "dev", sb->st_dev); AppendStatElement(interp, listObj, "ino", sb->st_ino); AppendStatElement(interp, listObj, "mode", sb->st_mode); AppendStatElement(interp, listObj, "nlink", sb->st_nlink); AppendStatElement(interp, listObj, "uid", sb->st_uid); AppendStatElement(interp, listObj, "gid", sb->st_gid); AppendStatElement(interp, listObj, "size", sb->st_size); AppendStatElement(interp, listObj, "atime", sb->st_atime); AppendStatElement(interp, listObj, "mtime", sb->st_mtime); AppendStatElement(interp, listObj, "ctime", sb->st_ctime); Jim_ListAppendElement(interp, listObj, Jim_NewStringObj(interp, "type", -1)); Jim_ListAppendElement(interp, listObj, Jim_NewStringObj(interp, JimGetFileType((int)sb->st_mode), -1)); if (varName) { Jim_Obj *objPtr = Jim_GetVariable(interp, varName, JIM_NONE); if (objPtr) { if (Jim_DictSize(interp, objPtr) < 0) { Jim_SetResultFormatted(interp, "can't set \"%#s(dev)\": variable isn't array", varName); Jim_FreeNewObj(interp, listObj); return JIM_ERR; } if (Jim_IsShared(objPtr)) objPtr = Jim_DuplicateObj(interp, objPtr); Jim_ListAppendList(interp, objPtr, listObj); Jim_DictSize(interp, objPtr); Jim_InvalidateStringRep(objPtr); Jim_FreeNewObj(interp, listObj); listObj = objPtr; } Jim_SetVariable(interp, varName, listObj); } Jim_SetResult(interp, listObj); return JIM_OK; } static int file_cmd_dirname(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { const char *path = Jim_String(argv[0]); const char *p = strrchr(path, '/'); if (!p && path[0] == '.' && path[1] == '.' && path[2] == '\0') { Jim_SetResultString(interp, "..", -1); } else if (!p) { Jim_SetResultString(interp, ".", -1); } else if (p == path) { Jim_SetResultString(interp, "/", -1); } else if (ISWINDOWS && p[-1] == ':') { Jim_SetResultString(interp, path, p - path + 1); } else { Jim_SetResultString(interp, path, p - path); } return JIM_OK; } |
︙ | ︙ | |||
3387 3388 3389 3390 3391 3392 3393 | { int i; char *newname = Jim_Alloc(MAXPATHLEN + 1); char *last = newname; *newname = 0; | | | | | | | | | | 3520 3521 3522 3523 3524 3525 3526 3527 3528 3529 3530 3531 3532 3533 3534 3535 3536 3537 3538 3539 3540 3541 3542 3543 3544 3545 3546 3547 3548 3549 3550 3551 3552 3553 3554 3555 3556 3557 3558 3559 3560 3561 3562 3563 3564 3565 3566 3567 3568 3569 3570 3571 3572 3573 3574 3575 3576 3577 3578 3579 3580 3581 3582 3583 3584 | { int i; char *newname = Jim_Alloc(MAXPATHLEN + 1); char *last = newname; *newname = 0; for (i = 0; i < argc; i++) { int len; const char *part = Jim_GetString(argv[i], &len); if (*part == '/') { last = newname; } else if (ISWINDOWS && strchr(part, ':')) { last = newname; } else if (part[0] == '.') { if (part[1] == '/') { part += 2; len -= 2; } else if (part[1] == 0 && last != newname) { continue; } } if (last != newname && last[-1] != '/') { *last++ = '/'; } if (len) { if (last + len - newname >= MAXPATHLEN) { Jim_Free(newname); Jim_SetResultString(interp, "Path too long", -1); return JIM_ERR; } memcpy(last, part, len); last += len; } if (last > newname + 1 && last[-1] == '/') { if (!ISWINDOWS || !(last > newname + 2 && last[-2] == ':')) { *--last = 0; } } } *last = 0; Jim_SetResult(interp, Jim_NewStringObjNoAlloc(interp, newname, last - newname)); return JIM_OK; } static int file_access(Jim_Interp *interp, Jim_Obj *filename, int mode) |
︙
3466 3467 3468 3469 3470 3471 3472 | } static int file_cmd_executable(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { #ifdef X_OK return file_access(interp, argv[0], X_OK); #else | | | 3599 3600 3601 3602 3603 3604 3605 3606 3607 3608 3609 3610 3611 3612 3613 | } static int file_cmd_executable(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { #ifdef X_OK return file_access(interp, argv[0], X_OK); #else Jim_SetResultBool(interp, 1); return JIM_OK; #endif } static int file_cmd_exists(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { |
︙
3491 3492 3493 3494 3495 3496 3497 | } while (argc--) { const char *path = Jim_String(argv[0]); if (unlink(path) == -1 && errno != ENOENT) { if (rmdir(path) == -1) { | | | 3624 3625 3626 3627 3628 3629 3630 3631 3632 3633 3634 3635 3636 3637 3638 | } while (argc--) { const char *path = Jim_String(argv[0]); if (unlink(path) == -1 && errno != ENOENT) { if (rmdir(path) == -1) { if (!force || Jim_EvalPrefix(interp, "file delete force", 1, argv) != JIM_OK) { Jim_SetResultFormatted(interp, "couldn't delete file \"%s\": %s", path, strerror(errno)); return JIM_ERR; } } } |
︙
3514 3515 3516 3517 3518 3519 3520 | #define MKDIR_DEFAULT(PATHNAME) mkdir(PATHNAME, 0755) #endif static int mkdir_all(char *path) { int ok = 1; | | | | | | | | 3647 3648 3649 3650 3651 3652 3653 3654 3655 3656 3657 3658 3659 3660 3661 3662 3663 3664 3665 3666 3667 3668 3669 3670 3671 3672 3673 3674 3675 3676 3677 3678 3679 3680 3681 3682 3683 3684 3685 3686 3687 3688 3689 3690 3691 3692 3693 3694 3695 | #define MKDIR_DEFAULT(PATHNAME) mkdir(PATHNAME, 0755) #endif static int mkdir_all(char *path) { int ok = 1; goto first; while (ok--) { { char *slash = strrchr(path, '/'); if (slash && slash != path) { *slash = 0; if (mkdir_all(path) != 0) { return -1; } *slash = '/'; } } first: if (MKDIR_DEFAULT(path) == 0) { return 0; } if (errno == ENOENT) { continue; } if (errno == EEXIST) { struct stat sb; if (stat(path, &sb) == 0 && S_ISDIR(sb.st_mode)) { return 0; } errno = EEXIST; } break; } return -1; } static int file_cmd_mkdir(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { |
︙
3837 3838 3839 3840 3841 3842 3843 | static const jim_subcmd_type file_command_table[] = { { "atime", "name", file_cmd_atime, 1, 1, | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3970 3971 3972 3973 3974 3975 3976 3977 3978 3979 3980 3981 3982 3983 3984 3985 3986 3987 3988 3989 3990 3991 3992 3993 3994 3995 3996 3997 3998 3999 4000 4001 4002 4003 4004 4005 4006 4007 4008 4009 4010 4011 4012 4013 4014 4015 4016 4017 4018 4019 4020 4021 4022 4023 4024 4025 4026 4027 4028 4029 4030 4031 4032 4033 4034 4035 4036 4037 4038 4039 4040 4041 4042 4043 4044 4045 4046 4047 4048 4049 4050 4051 4052 4053 4054 4055 4056 4057 4058 4059 4060 4061 4062 4063 4064 4065 4066 4067 4068 4069 4070 4071 4072 4073 4074 4075 4076 4077 4078 4079 4080 4081 4082 4083 4084 4085 4086 4087 4088 4089 4090 4091 4092 4093 4094 4095 4096 4097 4098 4099 4100 4101 4102 4103 4104 4105 4106 4107 4108 4109 4110 4111 4112 4113 4114 4115 4116 4117 4118 4119 4120 4121 4122 4123 4124 4125 4126 4127 4128 4129 4130 4131 4132 4133 4134 4135 4136 4137 4138 4139 4140 4141 4142 4143 4144 4145 4146 4147 4148 4149 4150 4151 4152 4153 4154 4155 4156 4157 4158 4159 4160 4161 4162 4163 4164 4165 | static const jim_subcmd_type file_command_table[] = { { "atime", "name", file_cmd_atime, 1, 1, }, { "mtime", "name ?time?", file_cmd_mtime, 1, 2, }, { "copy", "?-force? source dest", file_cmd_copy, 2, 3, }, { "dirname", "name", file_cmd_dirname, 1, 1, }, { "rootname", "name", file_cmd_rootname, 1, 1, }, { "extension", "name", file_cmd_extension, 1, 1, }, { "tail", "name", file_cmd_tail, 1, 1, }, { "normalize", "name", file_cmd_normalize, 1, 1, }, { "join", "name ?name ...?", file_cmd_join, 1, -1, }, { "readable", "name", file_cmd_readable, 1, 1, }, { "writable", "name", file_cmd_writable, 1, 1, }, { "executable", "name", file_cmd_executable, 1, 1, }, { "exists", "name", file_cmd_exists, 1, 1, }, { "delete", "?-force|--? name ...", file_cmd_delete, 1, -1, }, { "mkdir", "dir ...", file_cmd_mkdir, 1, -1, }, { "tempfile", "?template?", file_cmd_tempfile, 0, 1, }, { "rename", "?-force? source dest", file_cmd_rename, 2, 3, }, #if defined(HAVE_LINK) && defined(HAVE_SYMLINK) { "link", "?-symbolic|-hard? newname target", file_cmd_link, 2, 3, }, #endif #if defined(HAVE_READLINK) { "readlink", "name", file_cmd_readlink, 1, 1, }, #endif { "size", "name", file_cmd_size, 1, 1, }, { "stat", "name ?var?", file_cmd_stat, 1, 2, }, { "lstat", "name ?var?", file_cmd_lstat, 1, 2, }, { "type", "name", file_cmd_type, 1, 1, }, #ifdef HAVE_GETEUID { "owned", "name", file_cmd_owned, 1, 1, }, #endif { "isdirectory", "name", file_cmd_isdirectory, 1, 1, }, { "isfile", "name", file_cmd_isfile, 1, 1, }, { NULL } }; static int Jim_CdCmd(Jim_Interp *interp, int argc, Jim_Obj *const *argv) |
︙
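Editor's aside: the file_command_table hunk above wires the script-level file subcommands (join, mkdir, stat, delete, ...) to their C handlers through Jim_SubCmdProc, and Jim_fileInit also registers pwd and cd. A minimal embedding sketch that drives a few of them; it assumes the stock Jim embedding calls (Jim_CreateInterp, Jim_RegisterCoreCommands, Jim_Eval) and the jim.h header name, none of which are shown in this diff.

#include <stdio.h>
#include <jim.h>   /* assumed header name */

int main(void)
{
    Jim_Interp *interp = Jim_CreateInterp();

    Jim_RegisterCoreCommands(interp);
    Jim_InitStaticExtensions(interp);   /* registers "file", "exec", "clock", "array", ... */

    /* "file join" and "file mkdir" dispatch through file_command_table above */
    if (Jim_Eval(interp,
            "set dir [file join [pwd] demo-dir]\n"
            "file mkdir $dir\n"
            "file isdirectory $dir") == JIM_OK) {
        printf("isdirectory -> %s\n", Jim_String(Jim_GetResult(interp)));
    }

    Jim_FreeInterp(interp);
    return 0;
}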
4054 4055 4056 4057 4058 4059 4060 | if (getcwd(cwd, MAXPATHLEN) == NULL) { Jim_SetResultString(interp, "Failed to get pwd", -1); Jim_Free(cwd); return JIM_ERR; } else if (ISWINDOWS) { | | | 4187 4188 4189 4190 4191 4192 4193 4194 4195 4196 4197 4198 4199 4200 4201 | if (getcwd(cwd, MAXPATHLEN) == NULL) { Jim_SetResultString(interp, "Failed to get pwd", -1); Jim_Free(cwd); return JIM_ERR; } else if (ISWINDOWS) { char *p = cwd; while ((p = strchr(p, '\\')) != NULL) { *p++ = '/'; } } Jim_SetResultString(interp, cwd, -1); |
︙
4078 4079 4080 4081 4082 4083 4084 4085 4086 4087 4088 4089 4090 4091 4092 4093 4094 4095 | Jim_CreateCommand(interp, "file", Jim_SubCmdProc, (void *)file_command_table, NULL); Jim_CreateCommand(interp, "pwd", Jim_PwdCmd, NULL, NULL); Jim_CreateCommand(interp, "cd", Jim_CdCmd, NULL, NULL); return JIM_OK; } #include <string.h> #include <ctype.h> #if (!defined(HAVE_VFORK) || !defined(HAVE_WAITPID)) && !defined(__MINGW32__) static int Jim_ExecCmd(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { Jim_Obj *cmdlineObj = Jim_NewEmptyStringObj(interp); int i, j; int rc; | > | | | 4211 4212 4213 4214 4215 4216 4217 4218 4219 4220 4221 4222 4223 4224 4225 4226 4227 4228 4229 4230 4231 4232 4233 4234 4235 4236 4237 4238 4239 4240 4241 4242 4243 4244 4245 4246 | Jim_CreateCommand(interp, "file", Jim_SubCmdProc, (void *)file_command_table, NULL); Jim_CreateCommand(interp, "pwd", Jim_PwdCmd, NULL, NULL); Jim_CreateCommand(interp, "cd", Jim_CdCmd, NULL, NULL); return JIM_OK; } #define _GNU_SOURCE #include <string.h> #include <ctype.h> #if (!defined(HAVE_VFORK) || !defined(HAVE_WAITPID)) && !defined(__MINGW32__) static int Jim_ExecCmd(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { Jim_Obj *cmdlineObj = Jim_NewEmptyStringObj(interp); int i, j; int rc; for (i = 1; i < argc; i++) { int len; const char *arg = Jim_GetString(argv[i], &len); if (i > 1) { Jim_AppendString(interp, cmdlineObj, " ", 1); } if (strpbrk(arg, "\\\" ") == NULL) { Jim_AppendString(interp, cmdlineObj, arg, len); continue; } Jim_AppendString(interp, cmdlineObj, "\"", 1); for (j = 0; j < len; j++) { if (arg[j] == '\\' || arg[j] == '"') { |
︙
4143 4144 4145 4146 4147 4148 4149 | #else #include <errno.h> #include <signal.h> #if defined(__MINGW32__) | | | 4277 4278 4279 4280 4281 4282 4283 4284 4285 4286 4287 4288 4289 4290 4291 | #else #include <errno.h> #include <signal.h> #if defined(__MINGW32__) #ifndef STRICT #define STRICT #endif #define WIN32_LEAN_AND_MEAN #include <windows.h> #include <fcntl.h> |
︙
4203 4204 4205 4206 4207 4208 4209 | static const char *JimStrError(void); static char **JimSaveEnv(char **env); static void JimRestoreEnv(char **env); static int JimCreatePipeline(Jim_Interp *interp, int argc, Jim_Obj *const *argv, pidtype **pidArrayPtr, fdtype *inPipePtr, fdtype *outPipePtr, fdtype *errFilePtr); static void JimDetachPids(Jim_Interp *interp, int numPids, const pidtype *pidPtr); | | | 4337 4338 4339 4340 4341 4342 4343 4344 4345 4346 4347 4348 4349 4350 4351 | static const char *JimStrError(void); static char **JimSaveEnv(char **env); static void JimRestoreEnv(char **env); static int JimCreatePipeline(Jim_Interp *interp, int argc, Jim_Obj *const *argv, pidtype **pidArrayPtr, fdtype *inPipePtr, fdtype *outPipePtr, fdtype *errFilePtr); static void JimDetachPids(Jim_Interp *interp, int numPids, const pidtype *pidPtr); static int JimCleanupChildren(Jim_Interp *interp, int numPids, pidtype *pidPtr, Jim_Obj *errStrObj); static fdtype JimCreateTemp(Jim_Interp *interp, const char *contents, int len); static fdtype JimOpenForWrite(const char *filename, int append); static int JimRewindFd(fdtype fd); static void Jim_SetResultErrno(Jim_Interp *interp, const char *msg) { Jim_SetResultFormatted(interp, "%s: %s", msg, JimStrError()); |
︙
4233 4234 4235 4236 4237 4238 4239 4240 | } } static int JimAppendStreamToString(Jim_Interp *interp, fdtype fd, Jim_Obj *strObj) { char buf[256]; FILE *fh = JimFdOpenForRead(fd); if (fh == NULL) { | > > | > < | | | | 4367 4368 4369 4370 4371 4372 4373 4374 4375 4376 4377 4378 4379 4380 4381 4382 4383 4384 4385 4386 4387 4388 4389 4390 4391 4392 4393 4394 4395 4396 4397 4398 4399 4400 4401 4402 4403 4404 4405 4406 4407 4408 4409 4410 4411 4412 4413 4414 4415 4416 4417 4418 4419 4420 | } } static int JimAppendStreamToString(Jim_Interp *interp, fdtype fd, Jim_Obj *strObj) { char buf[256]; FILE *fh = JimFdOpenForRead(fd); int ret = 0; if (fh == NULL) { return -1; } while (1) { int retval = fread(buf, 1, sizeof(buf), fh); if (retval > 0) { ret = 1; Jim_AppendString(interp, strObj, buf, retval); } if (retval != sizeof(buf)) { break; } } fclose(fh); return ret; } static char **JimBuildEnv(Jim_Interp *interp) { int i; int size; int num; int n; char **envptr; char *envdata; Jim_Obj *objPtr = Jim_GetGlobalVariableStr(interp, "env", JIM_NONE); if (!objPtr) { return Jim_GetEnviron(); } num = Jim_ListLength(interp, objPtr); if (num % 2) { num--; } size = Jim_Length(objPtr) + 2; envptr = Jim_Alloc(sizeof(*envptr) * (num / 2 + 1) + size); envdata = (char *)&envptr[num / 2 + 1]; |
︙
4306 4307 4308 4309 4310 4311 4312 | static void JimFreeEnv(char **env, char **original_environ) { if (env != original_environ) { Jim_Free(env); } } | > > > > > > > > > > > > > > > | | < | < < | | > | > | | | < | | > | | | < < < | < < > | | | | | | | | 4442 4443 4444 4445 4446 4447 4448 4449 4450 4451 4452 4453 4454 4455 4456 4457 4458 4459 4460 4461 4462 4463 4464 4465 4466 4467 4468 4469 4470 4471 4472 4473 4474 4475 4476 4477 4478 4479 4480 4481 4482 4483 4484 4485 4486 4487 4488 4489 4490 4491 4492 4493 4494 4495 4496 4497 4498 4499 4500 4501 4502 4503 4504 4505 4506 4507 4508 4509 4510 4511 4512 4513 4514 4515 4516 4517 4518 4519 4520 4521 4522 4523 4524 | static void JimFreeEnv(char **env, char **original_environ) { if (env != original_environ) { Jim_Free(env); } } #ifndef jim_ext_signal const char *Jim_SignalId(int sig) { static char buf[10]; snprintf(buf, sizeof(buf), "%d", sig); return buf; } const char *Jim_SignalName(int sig) { return Jim_SignalId(sig); } #endif static int JimCheckWaitStatus(Jim_Interp *interp, pidtype pid, int waitStatus, Jim_Obj *errStrObj) { Jim_Obj *errorCode; if (WIFEXITED(waitStatus) && WEXITSTATUS(waitStatus) == 0) { return JIM_OK; } errorCode = Jim_NewListObj(interp, NULL, 0); if (WIFEXITED(waitStatus)) { Jim_ListAppendElement(interp, errorCode, Jim_NewStringObj(interp, "CHILDSTATUS", -1)); Jim_ListAppendElement(interp, errorCode, Jim_NewIntObj(interp, (long)pid)); Jim_ListAppendElement(interp, errorCode, Jim_NewIntObj(interp, WEXITSTATUS(waitStatus))); } else { const char *type; const char *action; if (WIFSIGNALED(waitStatus)) { type = "CHILDKILLED"; action = "killed"; } else { type = "CHILDSUSP"; action = "suspended"; } Jim_ListAppendElement(interp, errorCode, Jim_NewStringObj(interp, type, -1)); if (errStrObj) { Jim_AppendStrings(interp, errStrObj, "child ", action, " by signal ", Jim_SignalId(WTERMSIG(waitStatus)), "\n", NULL); } Jim_ListAppendElement(interp, errorCode, Jim_NewIntObj(interp, (long)pid)); Jim_ListAppendElement(interp, errorCode, Jim_NewStringObj(interp, Jim_SignalId(WTERMSIG(waitStatus)), -1)); Jim_ListAppendElement(interp, errorCode, Jim_NewStringObj(interp, Jim_SignalName(WTERMSIG(waitStatus)), -1)); } Jim_SetGlobalVariableStr(interp, "errorCode", errorCode); return JIM_ERR; } struct WaitInfo { pidtype pid; int status; int flags; }; struct WaitInfoTable { struct WaitInfo *info; int size; int used; }; #define WI_DETACHED 2 #define WAIT_TABLE_GROW_BY 4 |
︙
4391 4392 4393 4394 4395 4396 4397 | table->size = table->used = 0; return table; } static int Jim_ExecCmd(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { | | | > > > | > > | | | | > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | | | | | 4537 4538 4539 4540 4541 4542 4543 4544 4545 4546 4547 4548 4549 4550 4551 4552 4553 4554 4555 4556 4557 4558 4559 4560 4561 4562 4563 4564 4565 4566 4567 4568 4569 4570 4571 4572 4573 4574 4575 4576 4577 4578 4579 4580 4581 4582 4583 4584 4585 4586 4587 4588 4589 4590 4591 4592 4593 4594 4595 4596 4597 4598 4599 4600 4601 4602 4603 4604 4605 4606 4607 4608 4609 4610 4611 4612 4613 4614 4615 4616 4617 4618 4619 4620 4621 4622 4623 4624 4625 4626 4627 4628 4629 4630 4631 4632 4633 4634 4635 4636 4637 4638 4639 4640 4641 4642 4643 4644 4645 4646 4647 4648 4649 4650 4651 4652 4653 4654 4655 4656 4657 4658 4659 4660 4661 4662 4663 4664 4665 4666 4667 4668 4669 4670 4671 4672 4673 4674 4675 4676 4677 4678 4679 4680 4681 4682 4683 4684 4685 4686 4687 4688 4689 4690 4691 4692 4693 | table->size = table->used = 0; return table; } static int Jim_ExecCmd(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { fdtype outputId; fdtype errorId; pidtype *pidPtr; int numPids, result; int child_siginfo = 1; Jim_Obj *childErrObj; Jim_Obj *errStrObj; if (argc > 1 && Jim_CompareStringImmediate(interp, argv[argc - 1], "&")) { Jim_Obj *listObj; int i; argc--; numPids = JimCreatePipeline(interp, argc - 1, argv + 1, &pidPtr, NULL, NULL, NULL); if (numPids < 0) { return JIM_ERR; } listObj = Jim_NewListObj(interp, NULL, 0); for (i = 0; i < numPids; i++) { Jim_ListAppendElement(interp, listObj, Jim_NewIntObj(interp, (long)pidPtr[i])); } Jim_SetResult(interp, listObj); JimDetachPids(interp, numPids, pidPtr); Jim_Free(pidPtr); return JIM_OK; } numPids = JimCreatePipeline(interp, argc - 1, argv + 1, &pidPtr, NULL, &outputId, &errorId); if (numPids < 0) { return JIM_ERR; } result = JIM_OK; errStrObj = Jim_NewStringObj(interp, "", 0); if (outputId != JIM_BAD_FD) { if (JimAppendStreamToString(interp, outputId, errStrObj) < 0) { result = JIM_ERR; Jim_SetResultErrno(interp, "error reading from output pipe"); } } childErrObj = Jim_NewStringObj(interp, "", 0); Jim_IncrRefCount(childErrObj); if (JimCleanupChildren(interp, numPids, pidPtr, childErrObj) != JIM_OK) { result = JIM_ERR; } if (errorId != JIM_BAD_FD) { int ret; JimRewindFd(errorId); ret = JimAppendStreamToString(interp, errorId, errStrObj); if (ret < 0) { Jim_SetResultErrno(interp, "error reading from error pipe"); result = JIM_ERR; } else if (ret > 0) { child_siginfo = 0; } } if (child_siginfo) { Jim_AppendObj(interp, errStrObj, childErrObj); } Jim_DecrRefCount(interp, childErrObj); Jim_RemoveTrailingNewline(errStrObj); Jim_SetResult(interp, errStrObj); return result; } static void JimReapDetachedPids(struct WaitInfoTable *table) { struct WaitInfo *waitPtr; int count; int dest; if (!table) { return; } waitPtr = table->info; dest = 0; for (count = table->used; count > 0; waitPtr++, count--) { if (waitPtr->flags & WI_DETACHED) { int status; pidtype pid = JimWaitPid(waitPtr->pid, &status, WNOHANG); if (pid == waitPtr->pid) { table->used--; continue; } } if (waitPtr != &table->info[dest]) { table->info[dest] = *waitPtr; } dest++; } } static pidtype JimWaitForProcess(struct WaitInfoTable *table, pidtype pid, int *statusPtr) { int i; for (i = 0; i < table->used; i++) { if (pid == table->info[i].pid) { JimWaitPid(pid, statusPtr, 0); if (i != table->used - 1) { table->info[i] = table->info[table->used - 1]; } 
table->used--; return pid; } } return JIM_BAD_PID; } static void JimDetachPids(Jim_Interp *interp, int numPids, const pidtype *pidPtr) { int j; struct WaitInfoTable *table = Jim_CmdPrivData(interp); for (j = 0; j < numPids; j++) { int i; for (i = 0; i < table->used; i++) { if (pidPtr[j] == table->info[i].pid) { table->info[i].flags |= WI_DETACHED; break; } } |
︙
4534 4535 4536 4537 4538 4539 4540 | int numPids = 0; /* Actual number of processes that exist * at *pidPtr right now. */ int cmdCount; /* Count of number of distinct commands * found in argc/argv. */ const char *input = NULL; /* Describes input for pipeline, depending * on "inputFile". NULL means take input * from stdin/pipe. */ | | | | | | | 4716 4717 4718 4719 4720 4721 4722 4723 4724 4725 4726 4727 4728 4729 4730 4731 4732 4733 4734 4735 | int numPids = 0; /* Actual number of processes that exist * at *pidPtr right now. */ int cmdCount; /* Count of number of distinct commands * found in argc/argv. */ const char *input = NULL; /* Describes input for pipeline, depending * on "inputFile". NULL means take input * from stdin/pipe. */ int input_len = 0; #define FILE_NAME 0 #define FILE_APPEND 1 #define FILE_HANDLE 2 #define FILE_TEXT 3 int inputFile = FILE_NAME; /* 1 means input is name of input file. * 2 means input is filehandle name. * 0 means input holds actual * text to be input to command. */ int outputFile = FILE_NAME; /* 0 means output is the name of output file. |
︙
4564 4565 4566 4567 4568 4569 4570 | * or NULL if output goes to stdout/pipe. */ const char *error = NULL; /* Holds name of stderr file to pipe to, * or NULL if stderr goes to stderr/pipe. */ fdtype inputId = JIM_BAD_FD; fdtype outputId = JIM_BAD_FD; fdtype errorId = JIM_BAD_FD; fdtype lastOutputId = JIM_BAD_FD; | | | | 4746 4747 4748 4749 4750 4751 4752 4753 4754 4755 4756 4757 4758 4759 4760 4761 4762 4763 4764 4765 4766 4767 4768 4769 | * or NULL if output goes to stdout/pipe. */ const char *error = NULL; /* Holds name of stderr file to pipe to, * or NULL if stderr goes to stderr/pipe. */ fdtype inputId = JIM_BAD_FD; fdtype outputId = JIM_BAD_FD; fdtype errorId = JIM_BAD_FD; fdtype lastOutputId = JIM_BAD_FD; fdtype pipeIds[2]; int firstArg, lastArg; /* Indexes of first and last arguments in * current command. */ int lastBar; int i; pidtype pid; char **save_environ; struct WaitInfoTable *table = Jim_CmdPrivData(interp); char **arg_array = Jim_Alloc(sizeof(*arg_array) * (argc + 1)); int arg_count = 0; JimReapDetachedPids(table); if (inPipePtr != NULL) { *inPipePtr = JIM_BAD_FD; |
︙
4623 4624 4625 4626 4627 4628 4629 | output = arg + 1; if (*output == '>') { outputFile = FILE_APPEND; output++; } if (*output == '&') { | | | 4805 4806 4807 4808 4809 4810 4811 4812 4813 4814 4815 4816 4817 4818 4819 | output = arg + 1; if (*output == '>') { outputFile = FILE_APPEND; output++; } if (*output == '&') { output++; dup_error = 1; } if (*output == '@') { outputFile = FILE_HANDLE; output++; } |
︙
4664 4665 4666 4667 4668 4669 4670 | if (i == lastBar + 1 || i == argc - 1) { Jim_SetResultString(interp, "illegal use of | or |& in command", -1); goto badargs; } lastBar = i; cmdCount++; } | | | | | 4846 4847 4848 4849 4850 4851 4852 4853 4854 4855 4856 4857 4858 4859 4860 4861 4862 4863 4864 4865 4866 4867 4868 4869 4870 4871 4872 4873 4874 4875 4876 4877 4878 4879 4880 4881 4882 4883 4884 4885 4886 4887 4888 4889 | if (i == lastBar + 1 || i == argc - 1) { Jim_SetResultString(interp, "illegal use of | or |& in command", -1); goto badargs; } lastBar = i; cmdCount++; } arg_array[arg_count++] = (char *)arg; continue; } if (i >= argc) { Jim_SetResultFormatted(interp, "can't specify \"%s\" as last word in command", arg); goto badargs; } } if (arg_count == 0) { Jim_SetResultString(interp, "didn't specify command to execute", -1); badargs: Jim_Free(arg_array); return -1; } save_environ = JimSaveEnv(JimBuildEnv(interp)); if (input != NULL) { if (inputFile == FILE_TEXT) { inputId = JimCreateTemp(interp, input, input_len); if (inputId == JIM_BAD_FD) { goto error; } } else if (inputFile == FILE_HANDLE) { FILE *fh = JimGetAioFilehandle(interp, input); if (fh == NULL) { goto error; } inputId = JimDupFd(JimFileno(fh)); } |
︙
4745 4746 4747 4748 4749 4750 4751 | Jim_SetResultErrno(interp, "couldn't create output pipe"); goto error; } lastOutputId = pipeIds[1]; *outPipePtr = pipeIds[0]; pipeIds[0] = pipeIds[1] = JIM_BAD_FD; } | | | | | 4927 4928 4929 4930 4931 4932 4933 4934 4935 4936 4937 4938 4939 4940 4941 4942 4943 4944 4945 4946 4947 4948 4949 4950 | Jim_SetResultErrno(interp, "couldn't create output pipe"); goto error; } lastOutputId = pipeIds[1]; *outPipePtr = pipeIds[0]; pipeIds[0] = pipeIds[1] = JIM_BAD_FD; } if (error != NULL) { if (errorFile == FILE_HANDLE) { if (strcmp(error, "1") == 0) { if (lastOutputId != JIM_BAD_FD) { errorId = JimDupFd(lastOutputId); } else { error = "stdout"; } } if (errorId == JIM_BAD_FD) { FILE *fh = JimGetAioFilehandle(interp, error); if (fh == NULL) { goto error; |
︙
4800 4801 4802 4803 4804 4805 4806 | if (arg_array[lastArg][0] == '|') { if (arg_array[lastArg][1] == '&') { pipe_dup_err = 1; } break; } } | | | | | | | | | | 4982 4983 4984 4985 4986 4987 4988 4989 4990 4991 4992 4993 4994 4995 4996 4997 4998 4999 5000 5001 5002 5003 5004 5005 5006 5007 5008 5009 5010 5011 5012 5013 5014 5015 5016 5017 5018 5019 5020 5021 5022 5023 5024 5025 5026 5027 5028 5029 5030 5031 5032 5033 5034 5035 5036 5037 5038 5039 5040 5041 5042 5043 5044 5045 5046 5047 5048 5049 5050 5051 5052 5053 5054 5055 5056 5057 5058 5059 5060 5061 5062 5063 | if (arg_array[lastArg][0] == '|') { if (arg_array[lastArg][1] == '&') { pipe_dup_err = 1; } break; } } arg_array[lastArg] = NULL; if (lastArg == arg_count) { outputId = lastOutputId; } else { if (JimPipe(pipeIds) != 0) { Jim_SetResultErrno(interp, "couldn't create pipe"); goto error; } outputId = pipeIds[1]; } if (pipe_dup_err) { errorId = outputId; } #ifdef __MINGW32__ pid = JimStartWinProcess(interp, &arg_array[firstArg], save_environ ? save_environ[0] : NULL, inputId, outputId, errorId); if (pid == JIM_BAD_PID) { Jim_SetResultFormatted(interp, "couldn't exec \"%s\"", arg_array[firstArg]); goto error; } #else pid = vfork(); if (pid < 0) { Jim_SetResultErrno(interp, "couldn't fork child process"); goto error; } if (pid == 0) { if (inputId != -1) dup2(inputId, 0); if (outputId != -1) dup2(outputId, 1); if (errorId != -1) dup2(errorId, 2); for (i = 3; (i <= outputId) || (i <= inputId) || (i <= errorId); i++) { close(i); } (void)signal(SIGPIPE, SIG_DFL); execvpe(arg_array[firstArg], &arg_array[firstArg], Jim_GetEnviron()); fprintf(stderr, "couldn't exec \"%s\"\n", arg_array[firstArg]); _exit(127); } #endif if (table->used == table->size) { table->size += WAIT_TABLE_GROW_BY; table->info = Jim_Realloc(table->info, table->size * sizeof(*table->info)); } table->info[table->used].pid = pid; table->info[table->used].flags = 0; table->used++; pidPtr[numPids] = pid; errorId = origErrorId; if (inputId != JIM_BAD_FD) { JimCloseFd(inputId); } if (outputId != JIM_BAD_FD) { |
︙
4932 4933 4934 4935 4936 4937 4938 | Jim_Free(pidPtr); } numPids = -1; goto cleanup; } | | > | < < < < < < < < < | 5114 5115 5116 5117 5118 5119 5120 5121 5122 5123 5124 5125 5126 5127 5128 5129 5130 5131 5132 5133 5134 5135 5136 5137 5138 5139 5140 5141 5142 5143 5144 | Jim_Free(pidPtr); } numPids = -1; goto cleanup; } static int JimCleanupChildren(Jim_Interp *interp, int numPids, pidtype *pidPtr, Jim_Obj *errStrObj) { struct WaitInfoTable *table = Jim_CmdPrivData(interp); int result = JIM_OK; int i; for (i = 0; i < numPids; i++) { int waitStatus = 0; if (JimWaitForProcess(table, pidPtr[i], &waitStatus) != JIM_BAD_PID) { if (JimCheckWaitStatus(interp, pidPtr[i], waitStatus, errStrObj) != JIM_OK) { result = JIM_ERR; } } } Jim_Free(pidPtr); return result; } int Jim_execInit(Jim_Interp *interp) { if (Jim_PackageProvide(interp, "exec", "1.0", JIM_ERRMSG)) return JIM_ERR; |
︙
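Editor's aside: the exec machinery above (JimCreatePipeline / JimCleanupChildren / JimCheckWaitStatus) now threads an errStrObj through child cleanup and reports failures as CHILDSTATUS / CHILDKILLED / CHILDSUSP lists in errorCode. A hedged sketch of driving it from C; the shell commands and the demo_exec helper are illustrative assumptions, and the interpreter is assumed to have been initialised as in the embedding sketch further up.

#include <stdio.h>
#include <jim.h>   /* assumed header name */

/* Assumes Jim_InitStaticExtensions() already ran, which calls Jim_execInit(). */
static void demo_exec(Jim_Interp *interp)
{
    if (Jim_Eval(interp, "exec echo hello world | sort") == JIM_OK) {
        printf("output: %s\n", Jim_String(Jim_GetResult(interp)));
    } else {
        /* JimCheckWaitStatus() left e.g. {CHILDSTATUS <pid> <code>} in errorCode */
        Jim_Eval(interp, "set errorCode");
        printf("failed: %s\n", Jim_String(Jim_GetResult(interp)));
    }
}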
5119 5120 5121 5122 5123 5124 5125 | { return (fdtype)_get_osfhandle(_fileno(fh)); } static fdtype JimOpenForRead(const char *filename) { return CreateFile(filename, GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE, | | | | > > > > | | 5293 5294 5295 5296 5297 5298 5299 5300 5301 5302 5303 5304 5305 5306 5307 5308 5309 5310 5311 5312 5313 5314 5315 5316 5317 5318 5319 5320 5321 5322 5323 5324 5325 5326 5327 5328 5329 | { return (fdtype)_get_osfhandle(_fileno(fh)); } static fdtype JimOpenForRead(const char *filename) { return CreateFile(filename, GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE, JimStdSecAttrs(), OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); } static fdtype JimOpenForWrite(const char *filename, int append) { fdtype fd = CreateFile(filename, GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, JimStdSecAttrs(), append ? OPEN_ALWAYS : CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, (HANDLE) NULL); if (append && fd != JIM_BAD_FD) { SetFilePointer(fd, 0, NULL, FILE_END); } return fd; } static FILE *JimFdOpenForWrite(fdtype fd) { return _fdopen(_open_osfhandle((int)fd, _O_TEXT), "w"); } static pidtype JimWaitPid(pidtype pid, int *status, int nohang) { DWORD ret = WaitForSingleObject(pid, nohang ? 0 : INFINITE); if (ret == WAIT_TIMEOUT || ret == WAIT_FAILED) { return JIM_BAD_PID; } GetExitCodeProcess(pid, &ret); *status = ret; CloseHandle(pid); return pid; } |
︙
5164 5165 5166 5167 5168 5169 5170 | NULL); if (handle == INVALID_HANDLE_VALUE) { goto error; } if (contents != NULL) { | | | 5342 5343 5344 5345 5346 5347 5348 5349 5350 5351 5352 5353 5354 5355 5356 | NULL); if (handle == INVALID_HANDLE_VALUE) { goto error; } if (contents != NULL) { FILE *fh = JimFdOpenForWrite(JimDupFd(handle)); if (fh == NULL) { goto error; } if (fwrite(contents, len, 1, fh) != 1) { fclose(fh); |
︙
5193 5194 5195 5196 5197 5198 5199 | static int JimWinFindExecutable(const char *originalName, char fullPath[MAX_PATH]) { int i; static char extensions[][5] = {".exe", "", ".bat"}; for (i = 0; i < (int) (sizeof(extensions) / sizeof(extensions[0])); i++) { | < | | 5371 5372 5373 5374 5375 5376 5377 5378 5379 5380 5381 5382 5383 5384 5385 | static int JimWinFindExecutable(const char *originalName, char fullPath[MAX_PATH]) { int i; static char extensions[][5] = {".exe", "", ".bat"}; for (i = 0; i < (int) (sizeof(extensions) / sizeof(extensions[0])); i++) { snprintf(fullPath, MAX_PATH, "%s%s", originalName, extensions[i]); if (SearchPath(NULL, fullPath, NULL, MAX_PATH, fullPath, NULL) == 0) { continue; } if (GetFileAttributes(fullPath) & FILE_ATTRIBUTE_DIRECTORY) { continue; } |
︙
5437 5438 5439 5440 5441 5442 5443 | #ifdef HAVE_SYS_TIME_H #include <sys/time.h> #endif static int clock_cmd_format(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { | | | 5614 5615 5616 5617 5618 5619 5620 5621 5622 5623 5624 5625 5626 5627 5628 | #ifdef HAVE_SYS_TIME_H #include <sys/time.h> #endif static int clock_cmd_format(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { char buf[100]; time_t t; long seconds; const char *format = "%a %b %d %H:%M:%S %Z %Y"; if (argc == 2 || (argc == 3 && !Jim_CompareStringImmediate(interp, argv[1], "-format"))) { |
︙
5478 5479 5480 5481 5482 5483 5484 | struct tm tm; time_t now = time(0); if (!Jim_CompareStringImmediate(interp, argv[1], "-format")) { return -1; } | | | | 5655 5656 5657 5658 5659 5660 5661 5662 5663 5664 5665 5666 5667 5668 5669 5670 5671 5672 5673 5674 5675 5676 5677 5678 | struct tm tm; time_t now = time(0); if (!Jim_CompareStringImmediate(interp, argv[1], "-format")) { return -1; } localtime_r(&now, &tm); pt = strptime(Jim_String(argv[0]), Jim_String(argv[2]), &tm); if (pt == 0 || *pt != 0) { Jim_SetResultString(interp, "Failed to parse time according to format", -1); return JIM_ERR; } Jim_SetResultInt(interp, mktime(&tm)); return JIM_OK; } #endif static int clock_cmd_seconds(Jim_Interp *interp, int argc, Jim_Obj *const *argv) |
︙
5529 5530 5531 5532 5533 5534 5535 | static const jim_subcmd_type clock_command_table[] = { { "seconds", NULL, clock_cmd_seconds, 0, 0, | | | | | | | | 5706 5707 5708 5709 5710 5711 5712 5713 5714 5715 5716 5717 5718 5719 5720 5721 5722 5723 5724 5725 5726 5727 5728 5729 5730 5731 5732 5733 5734 5735 5736 5737 5738 5739 5740 5741 5742 5743 5744 5745 5746 5747 5748 5749 5750 5751 5752 5753 5754 5755 5756 | static const jim_subcmd_type clock_command_table[] = { { "seconds", NULL, clock_cmd_seconds, 0, 0, }, { "clicks", NULL, clock_cmd_micros, 0, 0, }, { "microseconds", NULL, clock_cmd_micros, 0, 0, }, { "milliseconds", NULL, clock_cmd_millis, 0, 0, }, { "format", "seconds ?-format format?", clock_cmd_format, 1, 3, }, #ifdef HAVE_STRPTIME { "scan", "str -format format", clock_cmd_scan, 3, 3, }, #endif { NULL } }; int Jim_clockInit(Jim_Interp *interp) { |
︙
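Editor's aside: clock_command_table above exposes seconds/milliseconds/microseconds plus format (strftime with a default of "%a %b %d %H:%M:%S %Z %Y") and, when strptime is available, scan. A small usage sketch under the same assumed embedding setup as the earlier examples; the helper name and the sample timestamp are mine.

#include <stdio.h>
#include <jim.h>   /* assumed header name */

/* Assumes interp already set up with Jim_InitStaticExtensions(). */
static void demo_clock(Jim_Interp *interp)
{
    /* clock_cmd_format: strftime() with an optional -format override */
    if (Jim_Eval(interp,
            "clock format [clock seconds] -format {%Y-%m-%d %H:%M:%S}") == JIM_OK) {
        printf("now:   %s\n", Jim_String(Jim_GetResult(interp)));
    }

    /* clock_cmd_scan is only compiled in when strptime() was detected */
    if (Jim_Eval(interp,
            "clock scan {2016-11-07 00:50:10} -format {%Y-%m-%d %H:%M:%S}") == JIM_OK) {
        printf("epoch: %s\n", Jim_String(Jim_GetResult(interp)));
    }
}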
5589 5590 5591 5592 5593 5594 5595 | #include <string.h> #include <stdio.h> #include <errno.h> static int array_cmd_exists(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { | | | | | | 5766 5767 5768 5769 5770 5771 5772 5773 5774 5775 5776 5777 5778 5779 5780 5781 5782 5783 5784 5785 5786 5787 5788 5789 5790 5791 5792 5793 5794 5795 5796 5797 5798 5799 5800 5801 5802 5803 5804 5805 | #include <string.h> #include <stdio.h> #include <errno.h> static int array_cmd_exists(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { Jim_SetResultInt(interp, Jim_GetVariable(interp, argv[0], 0) != 0); return JIM_OK; } static int array_cmd_get(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { Jim_Obj *objPtr = Jim_GetVariable(interp, argv[0], JIM_NONE); Jim_Obj *patternObj; if (!objPtr) { return JIM_OK; } patternObj = (argc == 1) ? NULL : argv[1]; if (patternObj == NULL || Jim_CompareStringImmediate(interp, patternObj, "*")) { if (Jim_IsList(objPtr) && Jim_ListLength(interp, objPtr) % 2 == 0) { Jim_SetResult(interp, objPtr); return JIM_OK; } } return Jim_DictValues(interp, objPtr, patternObj); } static int array_cmd_names(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { Jim_Obj *objPtr = Jim_GetVariable(interp, argv[0], JIM_NONE); |
︙
5638 5639 5640 5641 5642 5643 5644 | int i; int len; Jim_Obj *resultObj; Jim_Obj *objPtr; Jim_Obj **dictValuesObj; if (argc == 1 || Jim_CompareStringImmediate(interp, argv[1], "*")) { | | | | | | 5815 5816 5817 5818 5819 5820 5821 5822 5823 5824 5825 5826 5827 5828 5829 5830 5831 5832 5833 5834 5835 5836 5837 5838 5839 5840 5841 5842 5843 5844 5845 5846 5847 5848 5849 5850 5851 5852 5853 5854 5855 5856 5857 5858 5859 5860 5861 5862 5863 5864 | int i; int len; Jim_Obj *resultObj; Jim_Obj *objPtr; Jim_Obj **dictValuesObj; if (argc == 1 || Jim_CompareStringImmediate(interp, argv[1], "*")) { Jim_UnsetVariable(interp, argv[0], JIM_NONE); return JIM_OK; } objPtr = Jim_GetVariable(interp, argv[0], JIM_NONE); if (objPtr == NULL) { return JIM_OK; } if (Jim_DictPairs(interp, objPtr, &dictValuesObj, &len) != JIM_OK) { return JIM_ERR; } resultObj = Jim_NewDictObj(interp, NULL, 0); for (i = 0; i < len; i += 2) { if (!Jim_StringMatchObj(interp, argv[1], dictValuesObj[i], 0)) { Jim_DictAddElement(interp, resultObj, dictValuesObj[i], dictValuesObj[i + 1]); } } Jim_Free(dictValuesObj); Jim_SetVariable(interp, argv[0], resultObj); return JIM_OK; } static int array_cmd_size(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { Jim_Obj *objPtr; int len = 0; objPtr = Jim_GetVariable(interp, argv[0], JIM_NONE); if (objPtr) { len = Jim_DictSize(interp, objPtr); if (len < 0) { return JIM_ERR; } } |
︙
5712 5713 5714 5715 5716 5717 5718 | if (len % 2) { Jim_SetResultString(interp, "list must have an even number of elements", -1); return JIM_ERR; } dictObj = Jim_GetVariable(interp, argv[0], JIM_UNSHARED); if (!dictObj) { | | | 5889 5890 5891 5892 5893 5894 5895 5896 5897 5898 5899 5900 5901 5902 5903 | if (len % 2) { Jim_SetResultString(interp, "list must have an even number of elements", -1); return JIM_ERR; } dictObj = Jim_GetVariable(interp, argv[0], JIM_UNSHARED); if (!dictObj) { return Jim_SetVariable(interp, argv[0], listObj); } else if (Jim_DictSize(interp, dictObj) < 0) { return JIM_ERR; } if (Jim_IsShared(dictObj)) { |
︙
5741 5742 5743 5744 5745 5746 5747 | static const jim_subcmd_type array_command_table[] = { { "exists", "arrayName", array_cmd_exists, 1, 1, | | | | | | | | < > < > | > | 5918 5919 5920 5921 5922 5923 5924 5925 5926 5927 5928 5929 5930 5931 5932 5933 5934 5935 5936 5937 5938 5939 5940 5941 5942 5943 5944 5945 5946 5947 5948 5949 5950 5951 5952 5953 5954 5955 5956 5957 5958 5959 5960 5961 5962 5963 5964 5965 5966 5967 5968 5969 5970 5971 5972 5973 5974 5975 5976 5977 5978 5979 5980 5981 5982 5983 5984 5985 5986 5987 5988 5989 5990 5991 5992 5993 5994 5995 5996 5997 5998 5999 6000 6001 6002 6003 6004 6005 6006 6007 6008 6009 6010 6011 6012 6013 6014 6015 | static const jim_subcmd_type array_command_table[] = { { "exists", "arrayName", array_cmd_exists, 1, 1, }, { "get", "arrayName ?pattern?", array_cmd_get, 1, 2, }, { "names", "arrayName ?pattern?", array_cmd_names, 1, 2, }, { "set", "arrayName list", array_cmd_set, 2, 2, }, { "size", "arrayName", array_cmd_size, 1, 1, }, { "stat", "arrayName", array_cmd_stat, 1, 1, }, { "unset", "arrayName ?pattern?", array_cmd_unset, 1, 2, }, { NULL } }; int Jim_arrayInit(Jim_Interp *interp) { if (Jim_PackageProvide(interp, "array", "1.0", JIM_ERRMSG)) return JIM_ERR; Jim_CreateCommand(interp, "array", Jim_SubCmdProc, (void *)array_command_table, NULL); return JIM_OK; } int Jim_InitStaticExtensions(Jim_Interp *interp) { extern int Jim_bootstrapInit(Jim_Interp *); extern int Jim_aioInit(Jim_Interp *); extern int Jim_readdirInit(Jim_Interp *); extern int Jim_regexpInit(Jim_Interp *); extern int Jim_fileInit(Jim_Interp *); extern int Jim_globInit(Jim_Interp *); extern int Jim_execInit(Jim_Interp *); extern int Jim_clockInit(Jim_Interp *); extern int Jim_arrayInit(Jim_Interp *); extern int Jim_stdlibInit(Jim_Interp *); extern int Jim_tclcompatInit(Jim_Interp *); Jim_bootstrapInit(interp); Jim_aioInit(interp); Jim_readdirInit(interp); Jim_regexpInit(interp); Jim_fileInit(interp); Jim_globInit(interp); Jim_execInit(interp); Jim_clockInit(interp); Jim_arrayInit(interp); Jim_stdlibInit(interp); Jim_tclcompatInit(interp); return JIM_OK; } #define JIM_OPTIMIZATION #define _GNU_SOURCE #include <stdio.h> #include <stdlib.h> #include <string.h> #include <stdarg.h> #include <ctype.h> |
︙
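Editor's aside: the array_cmd_* handlers above emulate Tcl arrays on top of ordinary dict-valued variables — array set appends pairs to the dict, array unset filters it with Jim_StringMatchObj, array get returns the pairs. A speculative usage sketch; as before, the demo helper, header name, and prior interpreter setup are assumptions on my part.

#include <stdio.h>
#include <jim.h>   /* assumed header name */

/* Assumes interp was initialised with Jim_RegisterCoreCommands() and
 * Jim_InitStaticExtensions(), which provides the "array" ensemble. */
static void demo_array(Jim_Interp *interp)
{
    Jim_Eval(interp,
        "array set cfg {host localhost port 8080}\n"
        "array unset cfg po*\n"      /* array_cmd_unset keeps only non-matching keys */
        "array get cfg");
    printf("cfg -> %s\n", Jim_String(Jim_GetResult(interp)));  /* expected: host localhost */
}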
5892 5893 5894 5895 5896 5897 5898 5899 5900 5901 5902 5903 5904 5905 | #ifdef JIM_DEBUG_PANIC static void JimPanicDump(int fail_condition, const char *fmt, ...); #define JimPanic(X) JimPanicDump X #else #define JimPanic(X) #endif static char JimEmptyStringRep[] = ""; static void JimFreeCallFrame(Jim_Interp *interp, Jim_CallFrame *cf, int action); static int ListSetIndex(Jim_Interp *interp, Jim_Obj *listPtr, int listindex, Jim_Obj *newObjPtr, int flags); static int JimDeleteLocalProcs(Jim_Interp *interp, Jim_Stack *localCommands); | > > > > > > | 6070 6071 6072 6073 6074 6075 6076 6077 6078 6079 6080 6081 6082 6083 6084 6085 6086 6087 6088 6089 | #ifdef JIM_DEBUG_PANIC static void JimPanicDump(int fail_condition, const char *fmt, ...); #define JimPanic(X) JimPanicDump X #else #define JimPanic(X) #endif #ifdef JIM_OPTIMIZATION #define JIM_IF_OPTIM(X) X #else #define JIM_IF_OPTIM(X) #endif static char JimEmptyStringRep[] = ""; static void JimFreeCallFrame(Jim_Interp *interp, Jim_CallFrame *cf, int action); static int ListSetIndex(Jim_Interp *interp, Jim_Obj *listPtr, int listindex, Jim_Obj *newObjPtr, int flags); static int JimDeleteLocalProcs(Jim_Interp *interp, Jim_Stack *localCommands); |
︙
5948 5949 5950 5951 5952 5953 5954 | if (flags & JIM_CHARSET_SCAN) { if (*pattern == '^') { not++; pattern++; } | | | | | | | 6132 6133 6134 6135 6136 6137 6138 6139 6140 6141 6142 6143 6144 6145 6146 6147 6148 6149 6150 6151 6152 6153 6154 6155 6156 6157 6158 6159 6160 6161 6162 6163 6164 6165 6166 6167 6168 6169 | if (flags & JIM_CHARSET_SCAN) { if (*pattern == '^') { not++; pattern++; } if (*pattern == ']') { goto first; } } while (*pattern && *pattern != ']') { if (pattern[0] == '\\') { first: pattern += utf8_tounicode_case(pattern, &pchar, nocase); } else { int start; int end; pattern += utf8_tounicode_case(pattern, &start, nocase); if (pattern[0] == '-' && pattern[1]) { pattern += utf8_tounicode(pattern, &pchar); pattern += utf8_tounicode_case(pattern, &end, nocase); if ((c >= start && c <= end) || (c >= end && c <= start)) { match = 1; } continue; } pchar = start; } |
︙
6005 6006 6007 6008 6009 6010 6011 | switch (pattern[0]) { case '*': while (pattern[1] == '*') { pattern++; } pattern++; if (!pattern[0]) { | | | | | | | | 6189 6190 6191 6192 6193 6194 6195 6196 6197 6198 6199 6200 6201 6202 6203 6204 6205 6206 6207 6208 6209 6210 6211 6212 6213 6214 6215 6216 6217 6218 6219 6220 6221 6222 6223 6224 6225 6226 6227 6228 6229 6230 6231 6232 6233 | switch (pattern[0]) { case '*': while (pattern[1] == '*') { pattern++; } pattern++; if (!pattern[0]) { return 1; } while (*string) { if (JimGlobMatch(pattern, string, nocase)) return 1; string += utf8_tounicode(string, &c); } return 0; case '?': string += utf8_tounicode(string, &c); break; case '[': { string += utf8_tounicode(string, &c); pattern = JimCharsetMatch(pattern + 1, c, nocase ? JIM_NOCASE : 0); if (!pattern) { return 0; } if (!*pattern) { continue; } break; } case '\\': if (pattern[1]) { pattern++; } default: string += utf8_tounicode_case(string, &c, nocase); utf8_tounicode_case(pattern, &pchar, nocase); if (pchar != c) { return 0; } break; |
︙
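Editor's aside: JimCharsetMatch and JimGlobMatch above implement the glob rules used by string match and by the pattern arguments of the array and file commands — [a-z] ranges, ?, *, backslash escapes, with optional case folding. A short sketch through Jim_StringMatchObj, the wrapper already used earlier in this diff (in array_cmd_unset); the refcounting boilerplate and helper name are my additions.

#include <stdio.h>
#include <jim.h>   /* assumed header name */

static void demo_glob(Jim_Interp *interp)
{
    Jim_Obj *pat = Jim_NewStringObj(interp, "[a-c]*.c", -1);
    Jim_Obj *str = Jim_NewStringObj(interp, "Buffer.C", -1);

    Jim_IncrRefCount(pat);
    Jim_IncrRefCount(str);

    /* last argument: 0 = case sensitive, non-zero = nocase path in JimCharsetMatch */
    printf("case sensitive: %d\n", Jim_StringMatchObj(interp, pat, str, 0)); /* expect 0 */
    printf("nocase:         %d\n", Jim_StringMatchObj(interp, pat, str, 1)); /* expect 1 */

    Jim_DecrRefCount(interp, pat);
    Jim_DecrRefCount(interp, str);
}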
6085 6086 6087 6088 6089 6090 6091 | return JimSign(c1 - c2); } maxchars--; } if (!maxchars) { return 0; } | | | 6269 6270 6271 6272 6273 6274 6275 6276 6277 6278 6279 6280 6281 6282 6283 | return JimSign(c1 - c2); } maxchars--; } if (!maxchars) { return 0; } if (*s1) { return 1; } if (*s2) { return -1; } return 0; |
︙
6126 6127 6128 6129 6130 6131 6132 | static int JimStringLast(const char *s1, int l1, const char *s2, int l2) { const char *p; if (!l1 || !l2 || l1 > l2) return -1; | | | 6310 6311 6312 6313 6314 6315 6316 6317 6318 6319 6320 6321 6322 6323 6324 | static int JimStringLast(const char *s1, int l1, const char *s2, int l2) { const char *p; if (!l1 || !l2 || l1 > l2) return -1; for (p = s2 + l2 - 1; p != s2 - 1; p--) { if (*p == *s1 && memcmp(s1, p, l1) == 0) { return p - s2; } } return -1; } |
︙
6185 6186 6187 6188 6189 6190 6191 | if (str[i] == '+') { i++; } *sign = 1; } if (str[i] != '0') { | | | | | | | | | 6369 6370 6371 6372 6373 6374 6375 6376 6377 6378 6379 6380 6381 6382 6383 6384 6385 6386 6387 6388 6389 6390 6391 6392 6393 6394 6395 6396 6397 6398 6399 6400 6401 6402 6403 6404 6405 6406 6407 6408 6409 6410 6411 6412 6413 6414 6415 6416 6417 6418 6419 6420 6421 6422 6423 6424 6425 6426 6427 6428 6429 6430 6431 6432 6433 6434 6435 6436 6437 | if (str[i] == '+') { i++; } *sign = 1; } if (str[i] != '0') { return 0; } switch (str[i + 1]) { case 'x': case 'X': *base = 16; break; case 'o': case 'O': *base = 8; break; case 'b': case 'B': *base = 2; break; default: return 0; } i += 2; if (str[i] != '-' && str[i] != '+' && !isspace(UCHAR(str[i]))) { return i; } *base = 10; return 0; } static long jim_strtol(const char *str, char **endptr) { int sign; int base; int i = JimNumberBase(str, &base, &sign); if (base != 10) { long value = strtol(str + i, endptr, base); if (endptr == NULL || *endptr != str + i) { return value * sign; } } return strtol(str, endptr, 10); } static jim_wide jim_strtoull(const char *str, char **endptr) { #ifdef HAVE_LONG_LONG int sign; int base; int i = JimNumberBase(str, &base, &sign); if (base != 10) { jim_wide value = strtoull(str + i, endptr, base); if (endptr == NULL || *endptr != str + i) { return value * sign; } } return strtoull(str, endptr, 10); #else return (unsigned long)jim_strtol(str, endptr); #endif } int Jim_StringToWide(const char *str, jim_wide * widePtr, int base) |
︙
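Editor's aside: JimNumberBase above lets the integer converters accept 0x, 0o and 0b prefixes (with optional sign) and fall back to plain base-10 parsing when the prefix is absent or malformed; jim_strtol and jim_strtoull build on it. The sketch below goes through Jim_StringToWide, whose prototype closes the hunk; passing base 0 so that the string's own prefix picks the radix is my reading of the elided body and may differ in detail.

#include <stdio.h>
#include <jim.h>   /* assumed header name; provides jim_wide and Jim_StringToWide() */

int main(void)
{
    const char *inputs[] = { "0x1f", "-0b1010", "0o17", "42" };
    int i;

    for (i = 0; i < 4; i++) {
        jim_wide w;

        /* base 0: let the prefix detected by JimNumberBase() choose the radix
         * (assumption about the elided Jim_StringToWide body) */
        if (Jim_StringToWide(inputs[i], &w, 0) == JIM_OK) {
            printf("%-8s -> %lld\n", inputs[i], (long long)w);  /* expect 31, -10, 15, 42 */
        }
    }
    return 0;
}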
6264 6265 6266 6267 6268 6269 6270 | return JimCheckConversion(str, endptr); } int Jim_StringToDouble(const char *str, double *doublePtr) { char *endptr; | | | > > > > > | > | > | > > > > | > > > | 6448 6449 6450 6451 6452 6453 6454 6455 6456 6457 6458 6459 6460 6461 6462 6463 6464 6465 6466 6467 6468 6469 6470 6471 6472 6473 6474 6475 6476 6477 6478 6479 6480 6481 6482 6483 6484 6485 6486 6487 6488 6489 6490 6491 | return JimCheckConversion(str, endptr); } int Jim_StringToDouble(const char *str, double *doublePtr) { char *endptr; errno = 0; *doublePtr = strtod(str, &endptr); return JimCheckConversion(str, endptr); } static jim_wide JimPowWide(jim_wide b, jim_wide e) { jim_wide res = 1; if (b == 1) { return 1; } if (e < 0) { if (b != -1) { return 0; } e = -e; } while (e) { if (e & 1) { res *= b; } e >>= 1; b *= b; } return res; } #ifdef JIM_DEBUG_PANIC static void JimPanicDump(int condition, const char *fmt, ...) { |
︙
6345 6346 6347 6348 6349 6350 6351 | } char *Jim_StrDupLen(const char *s, int l) { char *copy = Jim_Alloc(l + 1); memcpy(copy, s, l + 1); | | | 6543 6544 6545 6546 6547 6548 6549 6550 6551 6552 6553 6554 6555 6556 6557 | } char *Jim_StrDupLen(const char *s, int l) { char *copy = Jim_Alloc(l + 1); memcpy(copy, s, l + 1); copy[l] = 0; return copy; } static jim_wide JimClock(void) { |
︙
6434 6435 6436 6437 6438 6439 6440 | minimal = JIM_HT_INITIAL_SIZE; Jim_ExpandHashTable(ht, minimal); } void Jim_ExpandHashTable(Jim_HashTable *ht, unsigned int size) { | | | | | | | | | | 6632 6633 6634 6635 6636 6637 6638 6639 6640 6641 6642 6643 6644 6645 6646 6647 6648 6649 6650 6651 6652 6653 6654 6655 6656 6657 6658 6659 6660 6661 6662 6663 6664 6665 6666 6667 6668 6669 6670 6671 6672 6673 6674 6675 6676 6677 6678 6679 6680 6681 6682 6683 6684 6685 6686 6687 6688 6689 6690 6691 6692 6693 6694 6695 6696 6697 6698 6699 6700 | minimal = JIM_HT_INITIAL_SIZE; Jim_ExpandHashTable(ht, minimal); } void Jim_ExpandHashTable(Jim_HashTable *ht, unsigned int size) { Jim_HashTable n; unsigned int realsize = JimHashTableNextPower(size), i; if (size <= ht->used) return; Jim_InitHashTable(&n, ht->type, ht->privdata); n.size = realsize; n.sizemask = realsize - 1; n.table = Jim_Alloc(realsize * sizeof(Jim_HashEntry *)); n.uniq = ht->uniq; memset(n.table, 0, realsize * sizeof(Jim_HashEntry *)); n.used = ht->used; for (i = 0; ht->used > 0; i++) { Jim_HashEntry *he, *nextHe; if (ht->table[i] == NULL) continue; he = ht->table[i]; while (he) { unsigned int h; nextHe = he->next; h = Jim_HashKey(ht, he->key) & n.sizemask; he->next = n.table[h]; n.table[h] = he; ht->used--; he = nextHe; } } assert(ht->used == 0); Jim_Free(ht->table); *ht = n; } int Jim_AddHashEntry(Jim_HashTable *ht, const void *key, void *val) { Jim_HashEntry *entry; entry = JimInsertHashEntry(ht, key, 0); if (entry == NULL) return JIM_ERR; Jim_SetHashKey(ht, entry, key); Jim_SetHashVal(ht, entry, val); return JIM_OK; } int Jim_ReplaceHashEntry(Jim_HashTable *ht, const void *key, void *val) |
︙
6514 6515 6516 6517 6518 6519 6520 | else { Jim_FreeEntryVal(ht, entry); Jim_SetHashVal(ht, entry, val); } existed = 1; } else { | | | 6712 6713 6714 6715 6716 6717 6718 6719 6720 6721 6722 6723 6724 6725 6726 | else { Jim_FreeEntryVal(ht, entry); Jim_SetHashVal(ht, entry, val); } existed = 1; } else { Jim_SetHashKey(ht, entry, key); Jim_SetHashVal(ht, entry, val); existed = 0; } return existed; } |
︙
6537 6538 6539 6540 6541 6542 6543 | return JIM_ERR; h = Jim_HashKey(ht, key) & ht->sizemask; he = ht->table[h]; prevHe = NULL; while (he) { if (Jim_CompareHashKeys(ht, key, he->key)) { | | | | | | | | 6735 6736 6737 6738 6739 6740 6741 6742 6743 6744 6745 6746 6747 6748 6749 6750 6751 6752 6753 6754 6755 6756 6757 6758 6759 6760 6761 6762 6763 6764 6765 6766 6767 6768 6769 6770 6771 6772 6773 6774 6775 6776 6777 6778 6779 6780 6781 6782 6783 6784 6785 6786 6787 6788 6789 6790 | return JIM_ERR; h = Jim_HashKey(ht, key) & ht->sizemask; he = ht->table[h]; prevHe = NULL; while (he) { if (Jim_CompareHashKeys(ht, key, he->key)) { if (prevHe) prevHe->next = he->next; else ht->table[h] = he->next; Jim_FreeEntryKey(ht, he); Jim_FreeEntryVal(ht, he); Jim_Free(he); ht->used--; return JIM_OK; } prevHe = he; he = he->next; } return JIM_ERR; } int Jim_FreeHashTable(Jim_HashTable *ht) { unsigned int i; for (i = 0; ht->used > 0; i++) { Jim_HashEntry *he, *nextHe; if ((he = ht->table[i]) == NULL) continue; while (he) { nextHe = he->next; Jim_FreeEntryKey(ht, he); Jim_FreeEntryVal(ht, he); Jim_Free(he); ht->used--; he = nextHe; } } Jim_Free(ht->table); JimResetHashTable(ht); return JIM_OK; } Jim_HashEntry *Jim_FindHashEntry(Jim_HashTable *ht, const void *key) { Jim_HashEntry *he; unsigned int h; |
︙
6655 6656 6657 6658 6659 6660 6661 | } static Jim_HashEntry *JimInsertHashEntry(Jim_HashTable *ht, const void *key, int replace) { unsigned int h; Jim_HashEntry *he; | | | | | | 6853 6854 6855 6856 6857 6858 6859 6860 6861 6862 6863 6864 6865 6866 6867 6868 6869 6870 6871 6872 6873 6874 6875 6876 6877 6878 6879 6880 | } static Jim_HashEntry *JimInsertHashEntry(Jim_HashTable *ht, const void *key, int replace) { unsigned int h; Jim_HashEntry *he; JimExpandHashTableIfNeeded(ht); h = Jim_HashKey(ht, key) & ht->sizemask; he = ht->table[h]; while (he) { if (Jim_CompareHashKeys(ht, key, he->key)) return replace ? he : NULL; he = he->next; } he = Jim_Alloc(sizeof(*he)); he->next = ht->table[h]; ht->table[h] = he; ht->used++; he->key = NULL; return he; |
︙
6701 6702 6703 6704 6705 6706 6707 | static void JimStringCopyHTKeyDestructor(void *privdata, void *key) { Jim_Free(key); } static const Jim_HashTableType JimPackageHashTableType = { | | | | | | | | | | | | | | 6899 6900 6901 6902 6903 6904 6905 6906 6907 6908 6909 6910 6911 6912 6913 6914 6915 6916 6917 6918 6919 6920 6921 6922 6923 6924 6925 6926 6927 6928 6929 6930 6931 6932 6933 6934 6935 6936 6937 6938 6939 6940 6941 6942 | static void JimStringCopyHTKeyDestructor(void *privdata, void *key) { Jim_Free(key); } static const Jim_HashTableType JimPackageHashTableType = { JimStringCopyHTHashFunction, JimStringCopyHTDup, NULL, JimStringCopyHTKeyCompare, JimStringCopyHTKeyDestructor, NULL }; typedef struct AssocDataValue { Jim_InterpDeleteProc *delProc; void *data; } AssocDataValue; static void JimAssocDataHashTableValueDestructor(void *privdata, void *data) { AssocDataValue *assocPtr = (AssocDataValue *) data; if (assocPtr->delProc != NULL) assocPtr->delProc((Jim_Interp *)privdata, assocPtr->data); Jim_Free(data); } static const Jim_HashTableType JimAssocDataHashTableType = { JimStringCopyHTHashFunction, JimStringCopyHTDup, NULL, JimStringCopyHTKeyCompare, JimStringCopyHTKeyDestructor, JimAssocDataHashTableValueDestructor }; void Jim_InitStack(Jim_Stack *stack) { stack->len = 0; stack->maxlen = 0; stack->vector = NULL; |
︙
6787 6788 6789 6790 6791 6792 6793 | for (i = 0; i < stack->len; i++) freeFunc(stack->vector[i]); } | | | | | | | | | | | | > | > | < < | | | | | | | | | | | | | 6985 6986 6987 6988 6989 6990 6991 6992 6993 6994 6995 6996 6997 6998 6999 7000 7001 7002 7003 7004 7005 7006 7007 7008 7009 7010 7011 7012 7013 7014 7015 7016 7017 7018 7019 7020 7021 7022 7023 7024 7025 7026 7027 7028 7029 7030 7031 7032 7033 7034 7035 7036 7037 7038 7039 7040 7041 7042 7043 7044 7045 7046 7047 7048 7049 | for (i = 0; i < stack->len; i++) freeFunc(stack->vector[i]); } #define JIM_TT_NONE 0 #define JIM_TT_STR 1 #define JIM_TT_ESC 2 #define JIM_TT_VAR 3 #define JIM_TT_DICTSUGAR 4 #define JIM_TT_CMD 5 #define JIM_TT_SEP 6 #define JIM_TT_EOL 7 #define JIM_TT_EOF 8 #define JIM_TT_LINE 9 #define JIM_TT_WORD 10 #define JIM_TT_SUBEXPR_START 11 #define JIM_TT_SUBEXPR_END 12 #define JIM_TT_SUBEXPR_COMMA 13 #define JIM_TT_EXPR_INT 14 #define JIM_TT_EXPR_DOUBLE 15 #define JIM_TT_EXPR_BOOLEAN 16 #define JIM_TT_EXPRSUGAR 17 #define JIM_TT_EXPR_OP 20 #define TOKEN_IS_SEP(type) (type >= JIM_TT_SEP && type <= JIM_TT_EOF) #define TOKEN_IS_EXPR_START(type) (type == JIM_TT_NONE || type == JIM_TT_SUBEXPR_START || type == JIM_TT_SUBEXPR_COMMA) #define TOKEN_IS_EXPR_OP(type) (type >= JIM_TT_EXPR_OP) struct JimParseMissing { int ch; int line; }; struct JimParserCtx { const char *p; int len; int linenr; const char *tstart; const char *tend; int tline; int tt; int eof; int inquote; int comment; struct JimParseMissing missing; }; static int JimParseScript(struct JimParserCtx *pc); static int JimParseSep(struct JimParserCtx *pc); static int JimParseEol(struct JimParserCtx *pc); static int JimParseCmd(struct JimParserCtx *pc); static int JimParseQuote(struct JimParserCtx *pc); |
︙
6862 6863 6864 6865 6866 6867 6868 | pc->p = prg; pc->len = len; pc->tstart = NULL; pc->tend = NULL; pc->tline = 0; pc->tt = JIM_TT_NONE; pc->eof = 0; | | | | | | | | 7060 7061 7062 7063 7064 7065 7066 7067 7068 7069 7070 7071 7072 7073 7074 7075 7076 7077 7078 7079 7080 7081 7082 7083 7084 7085 7086 7087 7088 7089 7090 7091 7092 7093 7094 7095 7096 7097 7098 7099 7100 7101 7102 7103 7104 7105 7106 7107 7108 7109 7110 7111 7112 7113 7114 7115 7116 7117 7118 7119 | pc->p = prg; pc->len = len; pc->tstart = NULL; pc->tend = NULL; pc->tline = 0; pc->tt = JIM_TT_NONE; pc->eof = 0; pc->inquote = 0; pc->linenr = linenr; pc->comment = 1; pc->missing.ch = ' '; pc->missing.line = linenr; } static int JimParseScript(struct JimParserCtx *pc) { while (1) { if (!pc->len) { pc->tstart = pc->p; pc->tend = pc->p - 1; pc->tline = pc->linenr; pc->tt = JIM_TT_EOL; pc->eof = 1; return JIM_OK; } switch (*(pc->p)) { case '\\': if (*(pc->p + 1) == '\n' && !pc->inquote) { return JimParseSep(pc); } pc->comment = 0; return JimParseStr(pc); case ' ': case '\t': case '\r': case '\f': if (!pc->inquote) return JimParseSep(pc); pc->comment = 0; return JimParseStr(pc); case '\n': case ';': pc->comment = 1; if (!pc->inquote) return JimParseEol(pc); return JimParseStr(pc); case '[': pc->comment = 0; return JimParseCmd(pc); case '$': pc->comment = 0; if (JimParseVar(pc) == JIM_ERR) { pc->tstart = pc->tend = pc->p++; pc->len--; pc->tt = JIM_TT_ESC; } return JIM_OK; case '#': if (pc->comment) { |
︙
6968 6969 6970 6971 6972 6973 6974 | } static void JimParseSubBrace(struct JimParserCtx *pc) { int level = 1; | | | 7166 7167 7168 7169 7170 7171 7172 7173 7174 7175 7176 7177 7178 7179 7180 | } static void JimParseSubBrace(struct JimParserCtx *pc) { int level = 1; pc->p++; pc->len--; while (pc->len) { switch (*pc->p) { case '\\': if (pc->len > 1) { if (*++pc->p == '\n') { |
︙
7012 7013 7014 7015 7016 7017 7018 | } static int JimParseSubQuote(struct JimParserCtx *pc) { int tt = JIM_TT_STR; int line = pc->tline; | | | 7210 7211 7212 7213 7214 7215 7216 7217 7218 7219 7220 7221 7222 7223 7224 | } static int JimParseSubQuote(struct JimParserCtx *pc) { int tt = JIM_TT_STR; int line = pc->tline; pc->p++; pc->len--; while (pc->len) { switch (*pc->p) { case '\\': if (pc->len > 1) { if (*++pc->p == '\n') { |
︙
7061 7062 7063 7064 7065 7066 7067 | static void JimParseSubCmd(struct JimParserCtx *pc) { int level = 1; int startofword = 1; int line = pc->tline; | | | 7259 7260 7261 7262 7263 7264 7265 7266 7267 7268 7269 7270 7271 7272 7273 | static void JimParseSubCmd(struct JimParserCtx *pc) { int level = 1; int startofword = 1; int line = pc->tline; pc->p++; pc->len--; while (pc->len) { switch (*pc->p) { case '\\': if (pc->len > 1) { if (*++pc->p == '\n') { |
︙
7141 7142 7143 7144 7145 7146 7147 | pc->tline = pc->linenr; pc->tt = JimParseSubQuote(pc); return JIM_OK; } static int JimParseVar(struct JimParserCtx *pc) { | | | | 7339 7340 7341 7342 7343 7344 7345 7346 7347 7348 7349 7350 7351 7352 7353 7354 7355 7356 7357 7358 7359 | pc->tline = pc->linenr; pc->tt = JimParseSubQuote(pc); return JIM_OK; } static int JimParseVar(struct JimParserCtx *pc) { pc->p++; pc->len--; #ifdef EXPRSUGAR_BRACKET if (*pc->p == '[') { JimParseCmd(pc); pc->tt = JIM_TT_EXPRSUGAR; return JIM_OK; } #endif pc->tstart = pc->p; |
︙
7177 7178 7179 7180 7181 7182 7183 | if (pc->len) { pc->p++; pc->len--; } } else { while (1) { | | | | 7375 7376 7377 7378 7379 7380 7381 7382 7383 7384 7385 7386 7387 7388 7389 7390 7391 7392 7393 7394 7395 7396 7397 7398 7399 7400 7401 7402 7403 7404 | if (pc->len) { pc->p++; pc->len--; } } else { while (1) { if (pc->p[0] == ':' && pc->p[1] == ':') { while (*pc->p == ':') { pc->p++; pc->len--; } continue; } if (isalnum(UCHAR(*pc->p)) || *pc->p == '_' || UCHAR(*pc->p) >= 0x80) { pc->p++; pc->len--; continue; } break; } if (*pc->p == '(') { int count = 1; const char *paren = NULL; pc->tt = JIM_TT_DICTSUGAR; while (count && pc->len) { |
︙
7219 7220 7221 7222 7223 7224 7225 | } } if (count == 0) { pc->p++; pc->len--; } else if (paren) { | | | 7417 7418 7419 7420 7421 7422 7423 7424 7425 7426 7427 7428 7429 7430 7431 | } } if (count == 0) { pc->p++; pc->len--; } else if (paren) { paren++; pc->len += (pc->p - paren); pc->p = paren; } #ifndef EXPRSUGAR_BRACKET if (*pc->tstart == '(') { pc->tt = JIM_TT_EXPRSUGAR; |
︙
7244 7245 7246 7247 7248 7249 7250 | return JIM_OK; } static int JimParseStr(struct JimParserCtx *pc) { if (pc->tt == JIM_TT_SEP || pc->tt == JIM_TT_EOL || pc->tt == JIM_TT_NONE || pc->tt == JIM_TT_STR) { | | | | | | | | > | | | | | | | 7442 7443 7444 7445 7446 7447 7448 7449 7450 7451 7452 7453 7454 7455 7456 7457 7458 7459 7460 7461 7462 7463 7464 7465 7466 7467 7468 7469 7470 7471 7472 7473 7474 7475 7476 7477 7478 7479 7480 7481 7482 7483 7484 7485 7486 7487 7488 7489 7490 7491 7492 7493 7494 7495 7496 7497 7498 7499 7500 7501 7502 7503 7504 7505 7506 7507 7508 7509 7510 7511 7512 7513 7514 7515 7516 7517 7518 7519 7520 7521 7522 7523 7524 7525 7526 7527 7528 7529 7530 7531 7532 7533 7534 7535 7536 7537 7538 7539 7540 7541 7542 7543 7544 7545 7546 7547 7548 7549 7550 7551 7552 | return JIM_OK; } static int JimParseStr(struct JimParserCtx *pc) { if (pc->tt == JIM_TT_SEP || pc->tt == JIM_TT_EOL || pc->tt == JIM_TT_NONE || pc->tt == JIM_TT_STR) { if (*pc->p == '{') { return JimParseBrace(pc); } if (*pc->p == '"') { pc->inquote = 1; pc->p++; pc->len--; pc->missing.line = pc->tline; } } pc->tstart = pc->p; pc->tline = pc->linenr; while (1) { if (pc->len == 0) { if (pc->inquote) { pc->missing.ch = '"'; } pc->tend = pc->p - 1; pc->tt = JIM_TT_ESC; return JIM_OK; } switch (*pc->p) { case '\\': if (!pc->inquote && *(pc->p + 1) == '\n') { pc->tend = pc->p - 1; pc->tt = JIM_TT_ESC; return JIM_OK; } if (pc->len >= 2) { if (*(pc->p + 1) == '\n') { pc->linenr++; } pc->p++; pc->len--; } else if (pc->len == 1) { pc->missing.ch = '\\'; } break; case '(': if (pc->len > 1 && pc->p[1] != '$') { break; } case ')': if (*pc->p == '(' || pc->tt == JIM_TT_VAR) { if (pc->p == pc->tstart) { pc->p++; pc->len--; } pc->tend = pc->p - 1; pc->tt = JIM_TT_ESC; return JIM_OK; } break; case '$': case '[': pc->tend = pc->p - 1; pc->tt = JIM_TT_ESC; return JIM_OK; case ' ': case '\t': case '\n': case '\r': case '\f': case ';': if (!pc->inquote) { pc->tend = pc->p - 1; pc->tt = JIM_TT_ESC; return JIM_OK; } else if (*pc->p == '\n') { pc->linenr++; } break; case '"': if (pc->inquote) { pc->tend = pc->p - 1; pc->tt = JIM_TT_ESC; pc->p++; pc->len--; pc->inquote = 0; return JIM_OK; } break; } pc->p++; pc->len--; } return JIM_OK; } static int JimParseComment(struct JimParserCtx *pc) { while (*pc->p) { if (*pc->p == '\\') { pc->p++; |
︙
7392 7393 7394 7395 7396 7397 7398 | } static int JimEscape(char *dest, const char *s, int slen) { char *p = dest; int i, len; | < < < | 7591 7592 7593 7594 7595 7596 7597 7598 7599 7600 7601 7602 7603 7604 | } static int JimEscape(char *dest, const char *s, int slen) { char *p = dest; int i, len; for (i = 0; i < slen; i++) { switch (s[i]) { case '\\': switch (s[i + 1]) { case 'a': *p++ = 0x7; i++; |
︙
7453 7454 7455 7456 7457 7458 7459 | for (k = 0; k < maxchars; k++) { int c = xdigitval(s[i + k + 1]); if (c == -1) { break; } val = (val << 4) | c; } | | | | | | | | | 7649 7650 7651 7652 7653 7654 7655 7656 7657 7658 7659 7660 7661 7662 7663 7664 7665 7666 7667 7668 7669 7670 7671 7672 7673 7674 7675 7676 7677 7678 7679 7680 7681 7682 7683 7684 7685 7686 7687 7688 7689 7690 7691 7692 7693 7694 7695 7696 7697 7698 7699 7700 7701 7702 7703 7704 7705 7706 7707 7708 7709 7710 7711 7712 7713 | for (k = 0; k < maxchars; k++) { int c = xdigitval(s[i + k + 1]); if (c == -1) { break; } val = (val << 4) | c; } if (s[i] == '{') { if (k == 0 || val > 0x1fffff || s[i + k + 1] != '}') { i--; k = 0; } else { k++; } } if (k) { if (s[i] == 'x') { *p++ = val; } else { p += utf8_fromunicode(p, val); } i += k; break; } *p++ = s[i]; } break; case 'v': *p++ = 0xb; i++; break; case '\0': *p++ = '\\'; i++; break; case '\n': *p++ = ' '; do { i++; } while (s[i + 1] == ' ' || s[i + 1] == '\t'); break; case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': { int val = 0; int c = odigitval(s[i + 1]); val = c; c = odigitval(s[i + 2]); if (c == -1) { |
︙ | ︙ | |||
7560 7561 7562 7563 7564 7565 7566 | token = Jim_Alloc(1); token[0] = '\0'; } else { len = (end - start) + 1; token = Jim_Alloc(len + 1); if (pc->tt != JIM_TT_ESC) { | | | < < < < < < < < < < < < < < | 7756 7757 7758 7759 7760 7761 7762 7763 7764 7765 7766 7767 7768 7769 7770 7771 7772 7773 7774 7775 7776 7777 7778 7779 7780 7781 7782 | token = Jim_Alloc(1); token[0] = '\0'; } else { len = (end - start) + 1; token = Jim_Alloc(len + 1); if (pc->tt != JIM_TT_ESC) { memcpy(token, start, len); token[len] = '\0'; } else { len = JimEscape(token, start, len); } } return Jim_NewStringObjNoAlloc(interp, token, len); } static int JimParseListSep(struct JimParserCtx *pc); static int JimParseListStr(struct JimParserCtx *pc); static int JimParseListQuote(struct JimParserCtx *pc); static int JimParseList(struct JimParserCtx *pc) { if (isspace(UCHAR(*pc->p))) { |
︙ | ︙ | |||
7647 7648 7649 7650 7651 7652 7653 | pc->tt = JIM_TT_STR; while (pc->len) { switch (*pc->p) { case '\\': pc->tt = JIM_TT_ESC; if (--pc->len == 0) { | | | 7829 7830 7831 7832 7833 7834 7835 7836 7837 7838 7839 7840 7841 7842 7843 | pc->tt = JIM_TT_STR; while (pc->len) { switch (*pc->p) { case '\\': pc->tt = JIM_TT_ESC; if (--pc->len == 0) { pc->tend = pc->p; return JIM_OK; } pc->p++; break; case '\n': pc->linenr++; |
︙ | ︙ | |||
7683 7684 7685 7686 7687 7688 7689 | while (pc->len) { if (isspace(UCHAR(*pc->p))) { pc->tend = pc->p - 1; return JIM_OK; } if (*pc->p == '\\') { if (--pc->len == 0) { | | | | | | | | | | | | 7865 7866 7867 7868 7869 7870 7871 7872 7873 7874 7875 7876 7877 7878 7879 7880 7881 7882 7883 7884 7885 7886 7887 7888 7889 7890 7891 7892 7893 7894 7895 7896 7897 7898 7899 7900 7901 7902 7903 7904 7905 7906 7907 7908 7909 7910 7911 7912 7913 7914 7915 7916 7917 7918 7919 7920 7921 7922 7923 7924 7925 7926 7927 7928 7929 7930 7931 7932 7933 7934 7935 7936 7937 7938 7939 7940 7941 7942 7943 7944 7945 | while (pc->len) { if (isspace(UCHAR(*pc->p))) { pc->tend = pc->p - 1; return JIM_OK; } if (*pc->p == '\\') { if (--pc->len == 0) { pc->tend = pc->p; return JIM_OK; } pc->tt = JIM_TT_ESC; pc->p++; } pc->p++; pc->len--; } pc->tend = pc->p - 1; return JIM_OK; } Jim_Obj *Jim_NewObj(Jim_Interp *interp) { Jim_Obj *objPtr; if (interp->freeList != NULL) { objPtr = interp->freeList; interp->freeList = objPtr->nextObjPtr; } else { objPtr = Jim_Alloc(sizeof(*objPtr)); } objPtr->refCount = 0; objPtr->prevObjPtr = NULL; objPtr->nextObjPtr = interp->liveList; if (interp->liveList) interp->liveList->prevObjPtr = objPtr; interp->liveList = objPtr; return objPtr; } void Jim_FreeObj(Jim_Interp *interp, Jim_Obj *objPtr) { JimPanic((objPtr->refCount != 0, "!!!Object %p freed with bad refcount %d, type=%s", objPtr, objPtr->refCount, objPtr->typePtr ? objPtr->typePtr->name : "<none>")); Jim_FreeIntRep(interp, objPtr); if (objPtr->bytes != NULL) { if (objPtr->bytes != JimEmptyStringRep) Jim_Free(objPtr->bytes); } if (objPtr->prevObjPtr) objPtr->prevObjPtr->nextObjPtr = objPtr->nextObjPtr; if (objPtr->nextObjPtr) objPtr->nextObjPtr->prevObjPtr = objPtr->prevObjPtr; if (interp->liveList == objPtr) interp->liveList = objPtr->nextObjPtr; #ifdef JIM_DISABLE_OBJECT_POOL Jim_Free(objPtr); #else objPtr->prevObjPtr = NULL; objPtr->nextObjPtr = interp->freeList; if (interp->freeList) interp->freeList->prevObjPtr = objPtr; interp->freeList = objPtr; objPtr->refCount = -1; #endif |
︙ | ︙ | |||
7776 7777 7778 7779 7780 7781 7782 | Jim_Obj *Jim_DuplicateObj(Jim_Interp *interp, Jim_Obj *objPtr) { Jim_Obj *dupPtr; dupPtr = Jim_NewObj(interp); if (objPtr->bytes == NULL) { | | | | | | | | | | 7958 7959 7960 7961 7962 7963 7964 7965 7966 7967 7968 7969 7970 7971 7972 7973 7974 7975 7976 7977 7978 7979 7980 7981 7982 7983 7984 7985 7986 7987 7988 7989 7990 7991 7992 7993 7994 7995 7996 7997 7998 7999 8000 8001 8002 8003 8004 8005 8006 8007 8008 8009 8010 8011 8012 8013 8014 8015 8016 8017 8018 8019 8020 8021 8022 8023 8024 8025 8026 8027 8028 8029 8030 | Jim_Obj *Jim_DuplicateObj(Jim_Interp *interp, Jim_Obj *objPtr) { Jim_Obj *dupPtr; dupPtr = Jim_NewObj(interp); if (objPtr->bytes == NULL) { dupPtr->bytes = NULL; } else if (objPtr->length == 0) { dupPtr->bytes = JimEmptyStringRep; dupPtr->length = 0; dupPtr->typePtr = NULL; return dupPtr; } else { dupPtr->bytes = Jim_Alloc(objPtr->length + 1); dupPtr->length = objPtr->length; memcpy(dupPtr->bytes, objPtr->bytes, objPtr->length + 1); } dupPtr->typePtr = objPtr->typePtr; if (objPtr->typePtr != NULL) { if (objPtr->typePtr->dupIntRepProc == NULL) { dupPtr->internalRep = objPtr->internalRep; } else { objPtr->typePtr->dupIntRepProc(interp, objPtr, dupPtr); } } return dupPtr; } const char *Jim_GetString(Jim_Obj *objPtr, int *lenPtr) { if (objPtr->bytes == NULL) { JimPanic((objPtr->typePtr->updateStringProc == NULL, "UpdateStringProc called against '%s' type.", objPtr->typePtr->name)); objPtr->typePtr->updateStringProc(objPtr); } if (lenPtr) *lenPtr = objPtr->length; return objPtr->bytes; } int Jim_Length(Jim_Obj *objPtr) { if (objPtr->bytes == NULL) { JimPanic((objPtr->typePtr->updateStringProc == NULL, "UpdateStringProc called against '%s' type.", objPtr->typePtr->name)); objPtr->typePtr->updateStringProc(objPtr); } return objPtr->length; } const char *Jim_String(Jim_Obj *objPtr) { if (objPtr->bytes == NULL) { JimPanic((objPtr->typePtr == NULL, "UpdateStringProc called against typeless value.")); JimPanic((objPtr->typePtr->updateStringProc == NULL, "UpdateStringProc called against '%s' type.", objPtr->typePtr->name)); objPtr->typePtr->updateStringProc(objPtr); } return objPtr->bytes; } |
︙ | ︙ | |||
7894 7895 7896 7897 7898 7899 7900 | dupPtr->internalRep.strValue.maxLength = srcPtr->length; dupPtr->internalRep.strValue.charLength = srcPtr->internalRep.strValue.charLength; } static int SetStringFromAny(Jim_Interp *interp, Jim_Obj *objPtr) { if (objPtr->typePtr != &stringObjType) { | | | | | | | 8076 8077 8078 8079 8080 8081 8082 8083 8084 8085 8086 8087 8088 8089 8090 8091 8092 8093 8094 8095 8096 8097 8098 8099 8100 8101 | dupPtr->internalRep.strValue.maxLength = srcPtr->length; dupPtr->internalRep.strValue.charLength = srcPtr->internalRep.strValue.charLength; } static int SetStringFromAny(Jim_Interp *interp, Jim_Obj *objPtr) { if (objPtr->typePtr != &stringObjType) { if (objPtr->bytes == NULL) { JimPanic((objPtr->typePtr->updateStringProc == NULL, "UpdateStringProc called against '%s' type.", objPtr->typePtr->name)); objPtr->typePtr->updateStringProc(objPtr); } Jim_FreeIntRep(interp, objPtr); objPtr->typePtr = &stringObjType; objPtr->internalRep.strValue.maxLength = objPtr->length; objPtr->internalRep.strValue.charLength = -1; } return JIM_OK; } int Jim_Utf8Length(Jim_Interp *interp, Jim_Obj *objPtr) { |
︙ | ︙ | |||
7930 7931 7932 7933 7934 7935 7936 | } Jim_Obj *Jim_NewStringObj(Jim_Interp *interp, const char *s, int len) { Jim_Obj *objPtr = Jim_NewObj(interp); | | | | | | | 8112 8113 8114 8115 8116 8117 8118 8119 8120 8121 8122 8123 8124 8125 8126 8127 8128 8129 8130 8131 8132 8133 8134 8135 8136 8137 8138 8139 8140 8141 8142 8143 8144 8145 8146 8147 8148 8149 8150 8151 8152 8153 8154 | } Jim_Obj *Jim_NewStringObj(Jim_Interp *interp, const char *s, int len) { Jim_Obj *objPtr = Jim_NewObj(interp); if (len == -1) len = strlen(s); if (len == 0) { objPtr->bytes = JimEmptyStringRep; } else { objPtr->bytes = Jim_Alloc(len + 1); memcpy(objPtr->bytes, s, len); objPtr->bytes[len] = '\0'; } objPtr->length = len; objPtr->typePtr = NULL; return objPtr; } Jim_Obj *Jim_NewStringObjUtf8(Jim_Interp *interp, const char *s, int charlen) { #ifdef JIM_UTF8 int bytelen = utf8_index(s, charlen); Jim_Obj *objPtr = Jim_NewStringObj(interp, s, bytelen); objPtr->typePtr = &stringObjType; objPtr->internalRep.strValue.maxLength = bytelen; objPtr->internalRep.strValue.charLength = charlen; return objPtr; #else return Jim_NewStringObj(interp, s, charlen); |
︙ | ︙ | |||
7989 7990 7991 7992 7993 7994 7995 | if (len == -1) len = strlen(str); needlen = objPtr->length + len; if (objPtr->internalRep.strValue.maxLength < needlen || objPtr->internalRep.strValue.maxLength == 0) { needlen *= 2; | | | | 8171 8172 8173 8174 8175 8176 8177 8178 8179 8180 8181 8182 8183 8184 8185 8186 8187 8188 8189 8190 8191 8192 8193 8194 8195 8196 8197 8198 8199 8200 8201 | if (len == -1) len = strlen(str); needlen = objPtr->length + len; if (objPtr->internalRep.strValue.maxLength < needlen || objPtr->internalRep.strValue.maxLength == 0) { needlen *= 2; if (needlen < 7) { needlen = 7; } if (objPtr->bytes == JimEmptyStringRep) { objPtr->bytes = Jim_Alloc(needlen + 1); } else { objPtr->bytes = Jim_Realloc(objPtr->bytes, needlen + 1); } objPtr->internalRep.strValue.maxLength = needlen; } memcpy(objPtr->bytes + objPtr->length, str, len); objPtr->bytes[objPtr->length + len] = '\0'; if (objPtr->internalRep.strValue.charLength >= 0) { objPtr->internalRep.strValue.charLength += utf8_strlen(objPtr->bytes + objPtr->length, len); } objPtr->length += len; } void Jim_AppendString(Jim_Interp *interp, Jim_Obj *objPtr, const char *str, int len) { |
︙ | ︙ | |||
8067 8068 8069 8070 8071 8072 8073 | int Jim_StringCompareObj(Jim_Interp *interp, Jim_Obj *firstObjPtr, Jim_Obj *secondObjPtr, int nocase) { int l1, l2; const char *s1 = Jim_GetString(firstObjPtr, &l1); const char *s2 = Jim_GetString(secondObjPtr, &l2); if (nocase) { | | | 8249 8250 8251 8252 8253 8254 8255 8256 8257 8258 8259 8260 8261 8262 8263 | int Jim_StringCompareObj(Jim_Interp *interp, Jim_Obj *firstObjPtr, Jim_Obj *secondObjPtr, int nocase) { int l1, l2; const char *s1 = Jim_GetString(firstObjPtr, &l1); const char *s2 = Jim_GetString(secondObjPtr, &l2); if (nocase) { return JimStringCompareLen(s1, s2, -1, nocase); } return JimStringCompare(s1, l1, s2, l2); } int Jim_StringCompareLenObj(Jim_Interp *interp, Jim_Obj *firstObjPtr, Jim_Obj *secondObjPtr, int nocase) { |
︙ | ︙ | |||
8169 8170 8171 8172 8173 8174 8175 | return NULL; } if (first == 0 && rangeLen == len) { return strObjPtr; } if (len == bytelen) { | | | 8351 8352 8353 8354 8355 8356 8357 8358 8359 8360 8361 8362 8363 8364 8365 | return NULL; } if (first == 0 && rangeLen == len) { return strObjPtr; } if (len == bytelen) { return Jim_NewStringObj(interp, str + first, rangeLen); } return Jim_NewStringObjUtf8(interp, str + utf8_index(str, first), rangeLen); #else return Jim_StringByteRangeObj(interp, strObjPtr, firstObjPtr, lastObjPtr); #endif } |
︙ | ︙ | |||
8198 8199 8200 8201 8202 8203 8204 | if (last < first) { return strObjPtr; } str = Jim_String(strObjPtr); | | | | | 8380 8381 8382 8383 8384 8385 8386 8387 8388 8389 8390 8391 8392 8393 8394 8395 8396 8397 8398 8399 8400 8401 8402 | if (last < first) { return strObjPtr; } str = Jim_String(strObjPtr); objPtr = Jim_NewStringObjUtf8(interp, str, first); if (newStrObj) { Jim_AppendObj(interp, objPtr, newStrObj); } Jim_AppendString(interp, objPtr, str + utf8_index(str, last + 1), len - last - 1); return objPtr; } static void JimStrCopyUpperLower(char *dest, const char *str, int uc) { |
︙ | ︙ | |||
8309 8310 8311 8312 8313 8314 8315 | static const char *JimFindTrimLeft(const char *str, int len, const char *trimchars, int trimlen) { while (len) { int c; int n = utf8_tounicode(str, &c); if (utf8_memchr(trimchars, trimlen, c) == NULL) { | | | 8491 8492 8493 8494 8495 8496 8497 8498 8499 8500 8501 8502 8503 8504 8505 | static const char *JimFindTrimLeft(const char *str, int len, const char *trimchars, int trimlen) { while (len) { int c; int n = utf8_tounicode(str, &c); if (utf8_memchr(trimchars, trimlen, c) == NULL) { break; } str += n; len -= n; } return str; } |
︙ | ︙ | |||
8380 8381 8382 8383 8384 8385 8386 | SetStringFromAny(interp, strObjPtr); len = Jim_Length(strObjPtr); nontrim = JimFindTrimRight(strObjPtr->bytes, len, trimchars, trimcharslen); if (nontrim == NULL) { | | | | | | | | | 8562 8563 8564 8565 8566 8567 8568 8569 8570 8571 8572 8573 8574 8575 8576 8577 8578 8579 8580 8581 8582 8583 8584 8585 8586 8587 8588 8589 8590 8591 8592 8593 8594 8595 8596 8597 8598 8599 8600 8601 8602 8603 8604 8605 8606 | SetStringFromAny(interp, strObjPtr); len = Jim_Length(strObjPtr); nontrim = JimFindTrimRight(strObjPtr->bytes, len, trimchars, trimcharslen); if (nontrim == NULL) { return Jim_NewEmptyStringObj(interp); } if (nontrim == strObjPtr->bytes + len) { return strObjPtr; } if (Jim_IsShared(strObjPtr)) { strObjPtr = Jim_NewStringObj(interp, strObjPtr->bytes, (nontrim - strObjPtr->bytes)); } else { strObjPtr->bytes[nontrim - strObjPtr->bytes] = 0; strObjPtr->length = (nontrim - strObjPtr->bytes); } return strObjPtr; } static Jim_Obj *JimStringTrim(Jim_Interp *interp, Jim_Obj *strObjPtr, Jim_Obj *trimcharsObjPtr) { Jim_Obj *objPtr = JimStringTrimLeft(interp, strObjPtr, trimcharsObjPtr); strObjPtr = JimStringTrimRight(interp, objPtr, trimcharsObjPtr); if (objPtr != strObjPtr && objPtr->refCount == 0) { Jim_FreeNewObj(interp, objPtr); } return strObjPtr; } |
︙ | ︙ | |||
8432 8433 8434 8435 8436 8437 8438 | #endif static int JimStringIs(Jim_Interp *interp, Jim_Obj *strObjPtr, Jim_Obj *strClass, int strict) { static const char * const strclassnames[] = { "integer", "alpha", "alnum", "ascii", "digit", "double", "lower", "upper", "space", "xdigit", | | | | 8614 8615 8616 8617 8618 8619 8620 8621 8622 8623 8624 8625 8626 8627 8628 8629 8630 8631 8632 8633 8634 | #endif static int JimStringIs(Jim_Interp *interp, Jim_Obj *strObjPtr, Jim_Obj *strClass, int strict) { static const char * const strclassnames[] = { "integer", "alpha", "alnum", "ascii", "digit", "double", "lower", "upper", "space", "xdigit", "control", "print", "graph", "punct", "boolean", NULL }; enum { STR_IS_INTEGER, STR_IS_ALPHA, STR_IS_ALNUM, STR_IS_ASCII, STR_IS_DIGIT, STR_IS_DOUBLE, STR_IS_LOWER, STR_IS_UPPER, STR_IS_SPACE, STR_IS_XDIGIT, STR_IS_CONTROL, STR_IS_PRINT, STR_IS_GRAPH, STR_IS_PUNCT, STR_IS_BOOLEAN, }; int strclass; int len; int i; const char *str; int (*isclassfunc)(int c) = NULL; |
︙ | ︙ | |||
8470 8471 8472 8473 8474 8475 8476 8477 8478 8479 8480 8481 8482 8483 | case STR_IS_DOUBLE: { double d; Jim_SetResultBool(interp, Jim_GetDouble(interp, strObjPtr, &d) == JIM_OK && errno != ERANGE); return JIM_OK; } case STR_IS_ALPHA: isclassfunc = isalpha; break; case STR_IS_ALNUM: isclassfunc = isalnum; break; case STR_IS_ASCII: isclassfunc = jim_isascii; break; case STR_IS_DIGIT: isclassfunc = isdigit; break; case STR_IS_LOWER: isclassfunc = islower; break; case STR_IS_UPPER: isclassfunc = isupper; break; | > > > > > > > | 8652 8653 8654 8655 8656 8657 8658 8659 8660 8661 8662 8663 8664 8665 8666 8667 8668 8669 8670 8671 8672 | case STR_IS_DOUBLE: { double d; Jim_SetResultBool(interp, Jim_GetDouble(interp, strObjPtr, &d) == JIM_OK && errno != ERANGE); return JIM_OK; } case STR_IS_BOOLEAN: { int b; Jim_SetResultBool(interp, Jim_GetBoolean(interp, strObjPtr, &b) == JIM_OK); return JIM_OK; } case STR_IS_ALPHA: isclassfunc = isalpha; break; case STR_IS_ALNUM: isclassfunc = isalnum; break; case STR_IS_ASCII: isclassfunc = jim_isascii; break; case STR_IS_DIGIT: isclassfunc = isdigit; break; case STR_IS_LOWER: isclassfunc = islower; break; case STR_IS_UPPER: isclassfunc = isupper; break; |
︙ | ︙ | |||
8522 8523 8524 8525 8526 8527 8528 | if (strcmp(str, objStr) != 0) return 0; if (objPtr->typePtr != &comparedStringObjType) { Jim_FreeIntRep(interp, objPtr); objPtr->typePtr = &comparedStringObjType; } | | | 8711 8712 8713 8714 8715 8716 8717 8718 8719 8720 8721 8722 8723 8724 8725 | if (strcmp(str, objStr) != 0) return 0; if (objPtr->typePtr != &comparedStringObjType) { Jim_FreeIntRep(interp, objPtr); objPtr->typePtr = &comparedStringObjType; } objPtr->internalRep.ptr = (char *)str; return 1; } } static int qsortCompareStringPointers(const void *a, const void *b) { char *const *sa = (char *const *)a; |
︙ | ︙ | |||
8598 8599 8600 8601 8602 8603 8604 | objPtr->internalRep.scriptLineValue.line = line; return objPtr; } static void FreeScriptInternalRep(Jim_Interp *interp, Jim_Obj *objPtr); static void DupScriptInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr); | < < | | | | | | | > > > > | 8787 8788 8789 8790 8791 8792 8793 8794 8795 8796 8797 8798 8799 8800 8801 8802 8803 8804 8805 8806 8807 8808 8809 8810 8811 8812 8813 8814 8815 8816 8817 8818 8819 8820 8821 8822 8823 8824 8825 8826 8827 8828 8829 8830 8831 8832 | objPtr->internalRep.scriptLineValue.line = line; return objPtr; } static void FreeScriptInternalRep(Jim_Interp *interp, Jim_Obj *objPtr); static void DupScriptInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr); static const Jim_ObjType scriptObjType = { "script", FreeScriptInternalRep, DupScriptInternalRep, NULL, JIM_TYPE_REFERENCES, }; typedef struct ScriptToken { Jim_Obj *objPtr; int type; } ScriptToken; typedef struct ScriptObj { ScriptToken *token; Jim_Obj *fileNameObj; int len; int substFlags; int inUse; /* Used to share a ScriptObj. Currently only used by Jim_EvalObj() as protection against shimmering of the currently evaluated object. */ int firstline; int linenr; int missing; } ScriptObj; static void JimSetScriptFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr); static int JimParseCheckMissing(Jim_Interp *interp, int ch); static ScriptObj *JimGetScript(Jim_Interp *interp, Jim_Obj *objPtr); void FreeScriptInternalRep(Jim_Interp *interp, Jim_Obj *objPtr) { int i; struct ScriptObj *script = (void *)objPtr->internalRep.ptr; if (--script->inUse != 0) |
︙ | ︙ | |||
8654 8655 8656 8657 8658 8659 8660 | JIM_NOTUSED(srcPtr); dupPtr->typePtr = NULL; } typedef struct { | | | | | | | | | | | 8845 8846 8847 8848 8849 8850 8851 8852 8853 8854 8855 8856 8857 8858 8859 8860 8861 8862 8863 8864 8865 8866 8867 8868 8869 8870 8871 | JIM_NOTUSED(srcPtr); dupPtr->typePtr = NULL; } typedef struct { const char *token; int len; int type; int line; } ParseToken; typedef struct { ParseToken *list; int size; int count; ParseToken static_list[20]; } ParseTokenList; static void ScriptTokenListInit(ParseTokenList *tokenlist) { tokenlist->list = tokenlist->static_list; tokenlist->size = sizeof(tokenlist->static_list) / sizeof(ParseToken); tokenlist->count = 0; |
︙ | ︙ | |||
8689 8690 8691 8692 8693 8694 8695 | static void ScriptAddToken(ParseTokenList *tokenlist, const char *token, int len, int type, int line) { ParseToken *t; if (tokenlist->count == tokenlist->size) { | | | | | | | | | | | | | | | | | | | | 8880 8881 8882 8883 8884 8885 8886 8887 8888 8889 8890 8891 8892 8893 8894 8895 8896 8897 8898 8899 8900 8901 8902 8903 8904 8905 8906 8907 8908 8909 8910 8911 8912 8913 8914 8915 8916 8917 8918 8919 8920 8921 8922 8923 8924 8925 8926 8927 8928 8929 8930 8931 8932 8933 8934 8935 8936 8937 8938 8939 8940 8941 8942 8943 8944 8945 8946 8947 8948 8949 8950 8951 8952 8953 8954 8955 8956 8957 8958 8959 8960 8961 8962 8963 8964 8965 8966 8967 8968 8969 8970 8971 8972 8973 8974 8975 8976 8977 8978 8979 8980 8981 8982 8983 8984 8985 8986 8987 8988 8989 8990 8991 8992 8993 8994 8995 8996 8997 8998 8999 9000 9001 9002 9003 9004 9005 9006 9007 9008 9009 9010 9011 9012 9013 9014 9015 9016 9017 9018 9019 9020 9021 9022 9023 9024 9025 9026 9027 9028 9029 9030 9031 9032 9033 | static void ScriptAddToken(ParseTokenList *tokenlist, const char *token, int len, int type, int line) { ParseToken *t; if (tokenlist->count == tokenlist->size) { tokenlist->size *= 2; if (tokenlist->list != tokenlist->static_list) { tokenlist->list = Jim_Realloc(tokenlist->list, tokenlist->size * sizeof(*tokenlist->list)); } else { tokenlist->list = Jim_Alloc(tokenlist->size * sizeof(*tokenlist->list)); memcpy(tokenlist->list, tokenlist->static_list, tokenlist->count * sizeof(*tokenlist->list)); } } t = &tokenlist->list[tokenlist->count++]; t->token = token; t->len = len; t->type = type; t->line = line; } static int JimCountWordTokens(ParseToken *t) { int expand = 1; int count = 0; if (t->type == JIM_TT_STR && !TOKEN_IS_SEP(t[1].type)) { if ((t->len == 1 && *t->token == '*') || (t->len == 6 && strncmp(t->token, "expand", 6) == 0)) { expand = -1; t++; } } while (!TOKEN_IS_SEP(t->type)) { t++; count++; } return count * expand; } static Jim_Obj *JimMakeScriptObj(Jim_Interp *interp, const ParseToken *t) { Jim_Obj *objPtr; if (t->type == JIM_TT_ESC && memchr(t->token, '\\', t->len) != NULL) { int len = t->len; char *str = Jim_Alloc(len + 1); len = JimEscape(str, t->token, len); objPtr = Jim_NewStringObjNoAlloc(interp, str, len); } else { objPtr = Jim_NewStringObj(interp, t->token, t->len); } return objPtr; } static void ScriptObjAddTokens(Jim_Interp *interp, struct ScriptObj *script, ParseTokenList *tokenlist) { int i; struct ScriptToken *token; int lineargs = 0; ScriptToken *linefirst; int count; int linenr; #ifdef DEBUG_SHOW_SCRIPT_TOKENS printf("==== Tokens ====\n"); for (i = 0; i < tokenlist->count; i++) { printf("[%2d]@%d %s '%.*s'\n", i, tokenlist->list[i].line, jim_tt_name(tokenlist->list[i].type), tokenlist->list[i].len, tokenlist->list[i].token); } #endif count = tokenlist->count; for (i = 0; i < tokenlist->count; i++) { if (tokenlist->list[i].type == JIM_TT_EOL) { count++; } } linenr = script->firstline = tokenlist->list[0].line; token = script->token = Jim_Alloc(sizeof(ScriptToken) * count); linefirst = token++; for (i = 0; i < tokenlist->count; ) { int wordtokens; while (tokenlist->list[i].type == JIM_TT_SEP) { i++; } wordtokens = JimCountWordTokens(tokenlist->list + i); if (wordtokens == 0) { if (lineargs) { linefirst->type = JIM_TT_LINE; linefirst->objPtr = JimNewScriptLineObj(interp, lineargs, linenr); Jim_IncrRefCount(linefirst->objPtr); lineargs = 0; linefirst = token++; } i++; continue; } else if (wordtokens != 1) { token->type = JIM_TT_WORD; token->objPtr = Jim_NewIntObj(interp, 
wordtokens); Jim_IncrRefCount(token->objPtr); token++; if (wordtokens < 0) { i++; wordtokens = -wordtokens - 1; lineargs--; } } if (lineargs == 0) { linenr = tokenlist->list[i].line; } lineargs++; while (wordtokens--) { const ParseToken *t = &tokenlist->list[i++]; token->type = t->type; token->objPtr = JimMakeScriptObj(interp, t); Jim_IncrRefCount(token->objPtr); |
︙ | ︙ | |||
8858 8859 8860 8861 8862 8863 8864 8865 8866 8867 8868 8869 8870 8871 | for (i = 0; i < script->len; i++) { const ScriptToken *t = &script->token[i]; printf("[%2d] %s %s\n", i, jim_tt_name(t->type), Jim_String(t->objPtr)); } #endif } static int JimParseCheckMissing(Jim_Interp *interp, int ch) { const char *msg; switch (ch) { case '\\': | > > > > > > > > > | 9049 9050 9051 9052 9053 9054 9055 9056 9057 9058 9059 9060 9061 9062 9063 9064 9065 9066 9067 9068 9069 9070 9071 | for (i = 0; i < script->len; i++) { const ScriptToken *t = &script->token[i]; printf("[%2d] %s %s\n", i, jim_tt_name(t->type), Jim_String(t->objPtr)); } #endif } int Jim_ScriptIsComplete(Jim_Interp *interp, Jim_Obj *scriptObj, char *stateCharPtr) { ScriptObj *script = JimGetScript(interp, scriptObj); if (stateCharPtr) { *stateCharPtr = script->missing; } return (script->missing == ' '); } static int JimParseCheckMissing(Jim_Interp *interp, int ch) { const char *msg; switch (ch) { case '\\': |
︙ | ︙ | |||
8895 8896 8897 8898 8899 8900 8901 | struct ScriptToken *token; token = script->token = Jim_Alloc(sizeof(ScriptToken) * tokenlist->count); for (i = 0; i < tokenlist->count; i++) { const ParseToken *t = &tokenlist->list[i]; | | | | | | | | | | | 9095 9096 9097 9098 9099 9100 9101 9102 9103 9104 9105 9106 9107 9108 9109 9110 9111 9112 9113 9114 9115 9116 9117 9118 9119 9120 9121 9122 9123 9124 9125 9126 9127 9128 9129 9130 9131 9132 9133 9134 9135 9136 9137 9138 9139 9140 9141 9142 9143 9144 9145 9146 9147 9148 9149 9150 9151 9152 9153 9154 9155 9156 9157 9158 9159 9160 9161 9162 9163 9164 9165 9166 9167 9168 9169 9170 9171 9172 9173 9174 9175 9176 | struct ScriptToken *token; token = script->token = Jim_Alloc(sizeof(ScriptToken) * tokenlist->count); for (i = 0; i < tokenlist->count; i++) { const ParseToken *t = &tokenlist->list[i]; token->type = t->type; token->objPtr = JimMakeScriptObj(interp, t); Jim_IncrRefCount(token->objPtr); token++; } script->len = i; } static void JimSetScriptFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr) { int scriptTextLen; const char *scriptText = Jim_GetString(objPtr, &scriptTextLen); struct JimParserCtx parser; struct ScriptObj *script; ParseTokenList tokenlist; int line = 1; if (objPtr->typePtr == &sourceObjType) { line = objPtr->internalRep.sourceValue.lineNumber; } ScriptTokenListInit(&tokenlist); JimParserInit(&parser, scriptText, scriptTextLen, line); while (!parser.eof) { JimParseScript(&parser); ScriptAddToken(&tokenlist, parser.tstart, parser.tend - parser.tstart + 1, parser.tt, parser.tline); } ScriptAddToken(&tokenlist, scriptText + scriptTextLen, 0, JIM_TT_EOF, 0); script = Jim_Alloc(sizeof(*script)); memset(script, 0, sizeof(*script)); script->inUse = 1; if (objPtr->typePtr == &sourceObjType) { script->fileNameObj = objPtr->internalRep.sourceValue.fileNameObj; } else { script->fileNameObj = interp->emptyObj; } Jim_IncrRefCount(script->fileNameObj); script->missing = parser.missing.ch; script->linenr = parser.missing.line; ScriptObjAddTokens(interp, script, &tokenlist); ScriptTokenListFree(&tokenlist); Jim_FreeIntRep(interp, objPtr); Jim_SetIntRepPtr(objPtr, script); objPtr->typePtr = &scriptObjType; } static void JimAddErrorToStack(Jim_Interp *interp, ScriptObj *script); static ScriptObj *JimGetScript(Jim_Interp *interp, Jim_Obj *objPtr) { if (objPtr == interp->emptyObj) { objPtr = interp->nullScriptObj; } if (objPtr->typePtr != &scriptObjType || ((struct ScriptObj *)Jim_GetIntRepPtr(objPtr))->substFlags) { JimSetScriptFromAny(interp, objPtr); } |
︙ | ︙ | |||
9001 9002 9003 9004 9005 9006 9007 | Jim_DecrRefCount(interp, cmdPtr->u.proc.nsObj); if (cmdPtr->u.proc.staticVars) { Jim_FreeHashTable(cmdPtr->u.proc.staticVars); Jim_Free(cmdPtr->u.proc.staticVars); } } else { | | | | | | | | | | | | | | | | | | 9201 9202 9203 9204 9205 9206 9207 9208 9209 9210 9211 9212 9213 9214 9215 9216 9217 9218 9219 9220 9221 9222 9223 9224 9225 9226 9227 9228 9229 9230 9231 9232 9233 9234 9235 9236 9237 9238 9239 9240 9241 9242 9243 9244 9245 9246 9247 9248 9249 9250 9251 9252 9253 9254 9255 9256 9257 9258 9259 9260 9261 9262 9263 9264 9265 9266 9267 9268 9269 9270 9271 | Jim_DecrRefCount(interp, cmdPtr->u.proc.nsObj); if (cmdPtr->u.proc.staticVars) { Jim_FreeHashTable(cmdPtr->u.proc.staticVars); Jim_Free(cmdPtr->u.proc.staticVars); } } else { if (cmdPtr->u.native.delProc) { cmdPtr->u.native.delProc(interp, cmdPtr->u.native.privData); } } if (cmdPtr->prevCmd) { JimDecrCmdRefCount(interp, cmdPtr->prevCmd); } Jim_Free(cmdPtr); } } static void JimVariablesHTValDestructor(void *interp, void *val) { Jim_DecrRefCount(interp, ((Jim_Var *)val)->objPtr); Jim_Free(val); } static const Jim_HashTableType JimVariablesHashTableType = { JimStringCopyHTHashFunction, JimStringCopyHTDup, NULL, JimStringCopyHTKeyCompare, JimStringCopyHTKeyDestructor, JimVariablesHTValDestructor }; static void JimCommandsHT_ValDestructor(void *interp, void *val) { JimDecrCmdRefCount(interp, val); } static const Jim_HashTableType JimCommandsHashTableType = { JimStringCopyHTHashFunction, JimStringCopyHTDup, NULL, JimStringCopyHTKeyCompare, JimStringCopyHTKeyDestructor, JimCommandsHT_ValDestructor }; #ifdef jim_ext_namespace static Jim_Obj *JimQualifyNameObj(Jim_Interp *interp, Jim_Obj *nsObj) { const char *name = Jim_String(nsObj); if (name[0] == ':' && name[1] == ':') { while (*++name == ':') { } nsObj = Jim_NewStringObj(interp, name, -1); } else if (Jim_Length(interp->framePtr->nsObj)) { nsObj = Jim_DuplicateObj(interp, interp->framePtr->nsObj); Jim_AppendStrings(interp, nsObj, "::", name, NULL); } return nsObj; } Jim_Obj *Jim_MakeGlobalNamespaceName(Jim_Interp *interp, Jim_Obj *nameObjPtr) |
︙ | ︙ | |||
9085 9086 9087 9088 9089 9090 9091 | } static const char *JimQualifyName(Jim_Interp *interp, const char *name, Jim_Obj **objPtrPtr) { Jim_Obj *objPtr = interp->emptyObj; if (name[0] == ':' && name[1] == ':') { | | | | | | | | | 9285 9286 9287 9288 9289 9290 9291 9292 9293 9294 9295 9296 9297 9298 9299 9300 9301 9302 9303 9304 9305 9306 9307 9308 9309 9310 9311 9312 9313 9314 9315 9316 9317 9318 9319 9320 9321 9322 9323 9324 9325 9326 9327 9328 9329 9330 9331 9332 9333 9334 9335 9336 9337 9338 9339 9340 9341 9342 9343 9344 9345 9346 9347 9348 9349 9350 9351 9352 9353 9354 9355 9356 9357 | } static const char *JimQualifyName(Jim_Interp *interp, const char *name, Jim_Obj **objPtrPtr) { Jim_Obj *objPtr = interp->emptyObj; if (name[0] == ':' && name[1] == ':') { while (*++name == ':') { } } else if (Jim_Length(interp->framePtr->nsObj)) { objPtr = Jim_DuplicateObj(interp, interp->framePtr->nsObj); Jim_AppendStrings(interp, objPtr, "::", name, NULL); name = Jim_String(objPtr); } Jim_IncrRefCount(objPtr); *objPtrPtr = objPtr; return name; } #define JimFreeQualifiedName(INTERP, OBJ) Jim_DecrRefCount((INTERP), (OBJ)) #else #define JimQualifyName(INTERP, NAME, DUMMY) (((NAME)[0] == ':' && (NAME)[1] == ':') ? (NAME) + 2 : (NAME)) #define JimFreeQualifiedName(INTERP, DUMMY) (void)(DUMMY) Jim_Obj *Jim_MakeGlobalNamespaceName(Jim_Interp *interp, Jim_Obj *nameObjPtr) { return nameObjPtr; } #endif static int JimCreateCommand(Jim_Interp *interp, const char *name, Jim_Cmd *cmd) { Jim_HashEntry *he = Jim_FindHashEntry(&interp->commands, name); if (he) { Jim_InterpIncrProcEpoch(interp); } if (he && interp->local) { cmd->prevCmd = Jim_GetHashEntryVal(he); Jim_SetHashVal(&interp->commands, he, cmd); } else { if (he) { Jim_DeleteHashEntry(&interp->commands, name); } Jim_AddHashEntry(&interp->commands, name, cmd); } return JIM_OK; } int Jim_CreateCommand(Jim_Interp *interp, const char *cmdNameStr, Jim_CmdProc *cmdProc, void *privData, Jim_DelCmdProc *delProc) { Jim_Cmd *cmdPtr = Jim_Alloc(sizeof(*cmdPtr)); memset(cmdPtr, 0, sizeof(*cmdPtr)); cmdPtr->inUse = 1; cmdPtr->u.native.delProc = delProc; cmdPtr->u.native.cmdProc = cmdProc; cmdPtr->u.native.privData = privData; JimCreateCommand(interp, cmdNameStr, cmdPtr); |
︙ | ︙ | |||
9172 9173 9174 9175 9176 9177 9178 | Jim_InitHashTable(cmdPtr->u.proc.staticVars, &JimVariablesHashTableType, interp); for (i = 0; i < len; i++) { Jim_Obj *objPtr, *initObjPtr, *nameObjPtr; Jim_Var *varPtr; int subLen; objPtr = Jim_ListGetIndex(interp, staticsListObjPtr, i); | | | 9372 9373 9374 9375 9376 9377 9378 9379 9380 9381 9382 9383 9384 9385 9386 | Jim_InitHashTable(cmdPtr->u.proc.staticVars, &JimVariablesHashTableType, interp); for (i = 0; i < len; i++) { Jim_Obj *objPtr, *initObjPtr, *nameObjPtr; Jim_Var *varPtr; int subLen; objPtr = Jim_ListGetIndex(interp, staticsListObjPtr, i); subLen = Jim_ListLength(interp, objPtr); if (subLen == 1 || subLen == 2) { nameObjPtr = Jim_ListGetIndex(interp, objPtr, 0); if (subLen == 1) { initObjPtr = Jim_GetVariable(interp, nameObjPtr, JIM_NONE); if (initObjPtr == NULL) { Jim_SetResultFormatted(interp, |
︙ | ︙ | |||
9218 9219 9220 9221 9222 9223 9224 | return JIM_OK; } static void JimUpdateProcNamespace(Jim_Interp *interp, Jim_Cmd *cmdPtr, const char *cmdname) { #ifdef jim_ext_namespace if (cmdPtr->isproc) { | | | | | | | | | | | 9418 9419 9420 9421 9422 9423 9424 9425 9426 9427 9428 9429 9430 9431 9432 9433 9434 9435 9436 9437 9438 9439 9440 9441 9442 9443 9444 9445 9446 9447 9448 9449 9450 9451 9452 9453 9454 9455 9456 9457 9458 9459 9460 9461 9462 9463 9464 9465 9466 9467 9468 9469 9470 9471 9472 9473 9474 9475 9476 9477 9478 9479 9480 9481 9482 9483 9484 9485 9486 9487 9488 9489 9490 9491 9492 9493 9494 9495 9496 9497 9498 9499 9500 9501 9502 9503 9504 9505 | return JIM_OK; } static void JimUpdateProcNamespace(Jim_Interp *interp, Jim_Cmd *cmdPtr, const char *cmdname) { #ifdef jim_ext_namespace if (cmdPtr->isproc) { const char *pt = strrchr(cmdname, ':'); if (pt && pt != cmdname && pt[-1] == ':') { Jim_DecrRefCount(interp, cmdPtr->u.proc.nsObj); cmdPtr->u.proc.nsObj = Jim_NewStringObj(interp, cmdname, pt - cmdname - 1); Jim_IncrRefCount(cmdPtr->u.proc.nsObj); if (Jim_FindHashEntry(&interp->commands, pt + 1)) { Jim_InterpIncrProcEpoch(interp); } } } #endif } static Jim_Cmd *JimCreateProcedureCmd(Jim_Interp *interp, Jim_Obj *argListObjPtr, Jim_Obj *staticsListObjPtr, Jim_Obj *bodyObjPtr, Jim_Obj *nsObj) { Jim_Cmd *cmdPtr; int argListLen; int i; argListLen = Jim_ListLength(interp, argListObjPtr); cmdPtr = Jim_Alloc(sizeof(*cmdPtr) + sizeof(struct Jim_ProcArg) * argListLen); memset(cmdPtr, 0, sizeof(*cmdPtr)); cmdPtr->inUse = 1; cmdPtr->isproc = 1; cmdPtr->u.proc.argListObjPtr = argListObjPtr; cmdPtr->u.proc.argListLen = argListLen; cmdPtr->u.proc.bodyObjPtr = bodyObjPtr; cmdPtr->u.proc.argsPos = -1; cmdPtr->u.proc.arglist = (struct Jim_ProcArg *)(cmdPtr + 1); cmdPtr->u.proc.nsObj = nsObj ? nsObj : interp->emptyObj; Jim_IncrRefCount(argListObjPtr); Jim_IncrRefCount(bodyObjPtr); Jim_IncrRefCount(cmdPtr->u.proc.nsObj); if (staticsListObjPtr && JimCreateProcedureStatics(interp, cmdPtr, staticsListObjPtr) != JIM_OK) { goto err; } for (i = 0; i < argListLen; i++) { Jim_Obj *argPtr; Jim_Obj *nameObjPtr; Jim_Obj *defaultObjPtr; int len; argPtr = Jim_ListGetIndex(interp, argListObjPtr, i); len = Jim_ListLength(interp, argPtr); if (len == 0) { Jim_SetResultString(interp, "argument with no name", -1); err: JimDecrCmdRefCount(interp, cmdPtr); return NULL; } if (len > 2) { Jim_SetResultFormatted(interp, "too many fields in argument specifier \"%#s\"", argPtr); goto err; } if (len == 2) { nameObjPtr = Jim_ListGetIndex(interp, argPtr, 0); defaultObjPtr = Jim_ListGetIndex(interp, argPtr, 1); } else { nameObjPtr = argPtr; defaultObjPtr = NULL; } if (Jim_CompareStringImmediate(interp, nameObjPtr, "args")) { if (cmdPtr->u.proc.argsPos >= 0) { |
︙ | ︙ | |||
9356 9357 9358 9359 9360 9361 9362 | if (newName[0] == 0) { return Jim_DeleteCommand(interp, oldName); } fqold = JimQualifyName(interp, oldName, &qualifiedOldNameObj); fqnew = JimQualifyName(interp, newName, &qualifiedNewNameObj); | | | | | | 9556 9557 9558 9559 9560 9561 9562 9563 9564 9565 9566 9567 9568 9569 9570 9571 9572 9573 9574 9575 9576 9577 9578 9579 9580 9581 9582 9583 9584 9585 9586 9587 9588 | if (newName[0] == 0) { return Jim_DeleteCommand(interp, oldName); } fqold = JimQualifyName(interp, oldName, &qualifiedOldNameObj); fqnew = JimQualifyName(interp, newName, &qualifiedNewNameObj); he = Jim_FindHashEntry(&interp->commands, fqold); if (he == NULL) { Jim_SetResultFormatted(interp, "can't rename \"%s\": command doesn't exist", oldName); } else if (Jim_FindHashEntry(&interp->commands, fqnew)) { Jim_SetResultFormatted(interp, "can't rename to \"%s\": command already exists", newName); } else { cmdPtr = Jim_GetHashEntryVal(he); JimIncrCmdRefCount(cmdPtr); JimUpdateProcNamespace(interp, cmdPtr, fqnew); Jim_AddHashEntry(&interp->commands, fqnew, cmdPtr); Jim_DeleteHashEntry(&interp->commands, fqold); Jim_InterpIncrProcEpoch(interp); ret = JIM_OK; } JimFreeQualifiedName(interp, qualifiedOldNameObj); JimFreeQualifiedName(interp, qualifiedNewNameObj); |
︙ | ︙ | |||
9417 9418 9419 9420 9421 9422 9423 | if (objPtr->typePtr != &commandObjType || objPtr->internalRep.cmdValue.procEpoch != interp->procEpoch #ifdef jim_ext_namespace || !Jim_StringEqObj(objPtr->internalRep.cmdValue.nsObj, interp->framePtr->nsObj) #endif ) { | | | | | | | | | | | | | | | | | | | 9617 9618 9619 9620 9621 9622 9623 9624 9625 9626 9627 9628 9629 9630 9631 9632 9633 9634 9635 9636 9637 9638 9639 9640 9641 9642 9643 9644 9645 9646 9647 9648 9649 9650 9651 9652 9653 9654 9655 9656 9657 9658 9659 9660 9661 9662 9663 9664 9665 9666 9667 9668 9669 9670 9671 9672 9673 9674 9675 9676 9677 9678 9679 9680 9681 9682 9683 9684 9685 9686 9687 9688 9689 9690 9691 9692 9693 9694 9695 9696 9697 9698 9699 9700 9701 9702 9703 9704 9705 9706 9707 9708 9709 9710 9711 9712 9713 9714 9715 9716 9717 9718 9719 9720 9721 9722 9723 9724 9725 9726 9727 9728 9729 9730 9731 9732 9733 9734 9735 9736 9737 9738 9739 9740 9741 9742 9743 9744 9745 9746 9747 9748 9749 9750 9751 9752 9753 9754 9755 9756 9757 9758 9759 9760 9761 9762 9763 9764 9765 9766 9767 9768 9769 9770 9771 9772 9773 9774 9775 9776 9777 9778 9779 9780 9781 9782 9783 9784 9785 9786 9787 9788 9789 9790 9791 9792 9793 9794 9795 9796 9797 9798 9799 9800 9801 9802 9803 9804 9805 9806 9807 9808 | if (objPtr->typePtr != &commandObjType || objPtr->internalRep.cmdValue.procEpoch != interp->procEpoch #ifdef jim_ext_namespace || !Jim_StringEqObj(objPtr->internalRep.cmdValue.nsObj, interp->framePtr->nsObj) #endif ) { const char *name = Jim_String(objPtr); Jim_HashEntry *he; if (name[0] == ':' && name[1] == ':') { while (*++name == ':') { } } #ifdef jim_ext_namespace else if (Jim_Length(interp->framePtr->nsObj)) { Jim_Obj *nameObj = Jim_DuplicateObj(interp, interp->framePtr->nsObj); Jim_AppendStrings(interp, nameObj, "::", name, NULL); he = Jim_FindHashEntry(&interp->commands, Jim_String(nameObj)); Jim_FreeNewObj(interp, nameObj); if (he) { goto found; } } #endif he = Jim_FindHashEntry(&interp->commands, name); if (he == NULL) { if (flags & JIM_ERRMSG) { Jim_SetResultFormatted(interp, "invalid command name \"%#s\"", objPtr); } return NULL; } #ifdef jim_ext_namespace found: #endif cmd = Jim_GetHashEntryVal(he); Jim_FreeIntRep(interp, objPtr); objPtr->typePtr = &commandObjType; objPtr->internalRep.cmdValue.procEpoch = interp->procEpoch; objPtr->internalRep.cmdValue.cmdPtr = cmd; objPtr->internalRep.cmdValue.nsObj = interp->framePtr->nsObj; Jim_IncrRefCount(interp->framePtr->nsObj); } else { cmd = objPtr->internalRep.cmdValue.cmdPtr; } while (cmd->u.proc.upcall) { cmd = cmd->prevCmd; } return cmd; } #define JIM_DICT_SUGAR 100 static int SetVariableFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr); static const Jim_ObjType variableObjType = { "variable", NULL, NULL, NULL, JIM_TYPE_REFERENCES, }; static int JimValidName(Jim_Interp *interp, const char *type, Jim_Obj *nameObjPtr) { if (nameObjPtr->typePtr != &variableObjType) { int len; const char *str = Jim_GetString(nameObjPtr, &len); if (memchr(str, '\0', len)) { Jim_SetResultFormatted(interp, "%s name contains embedded null", type); return JIM_ERR; } } return JIM_OK; } static int SetVariableFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr) { const char *varName; Jim_CallFrame *framePtr; Jim_HashEntry *he; int global; int len; if (objPtr->typePtr == &variableObjType) { framePtr = objPtr->internalRep.varValue.global ? 
interp->topFramePtr : interp->framePtr; if (objPtr->internalRep.varValue.callFrameId == framePtr->id) { return JIM_OK; } } else if (objPtr->typePtr == &dictSubstObjType) { return JIM_DICT_SUGAR; } else if (JimValidName(interp, "variable", objPtr) != JIM_OK) { return JIM_ERR; } varName = Jim_GetString(objPtr, &len); if (len && varName[len - 1] == ')' && strchr(varName, '(') != NULL) { return JIM_DICT_SUGAR; } if (varName[0] == ':' && varName[1] == ':') { while (*++varName == ':') { } global = 1; framePtr = interp->topFramePtr; } else { global = 0; framePtr = interp->framePtr; } he = Jim_FindHashEntry(&framePtr->vars, varName); if (he == NULL) { if (!global && framePtr->staticVars) { he = Jim_FindHashEntry(framePtr->staticVars, varName); } if (he == NULL) { return JIM_ERR; } } Jim_FreeIntRep(interp, objPtr); objPtr->typePtr = &variableObjType; objPtr->internalRep.varValue.callFrameId = framePtr->id; objPtr->internalRep.varValue.varPtr = Jim_GetHashEntryVal(he); objPtr->internalRep.varValue.global = global; return JIM_OK; } static int JimDictSugarSet(Jim_Interp *interp, Jim_Obj *ObjPtr, Jim_Obj *valObjPtr); static Jim_Obj *JimDictSugarGet(Jim_Interp *interp, Jim_Obj *ObjPtr, int flags); static Jim_Var *JimCreateVariable(Jim_Interp *interp, Jim_Obj *nameObjPtr, Jim_Obj *valObjPtr) { const char *name; Jim_CallFrame *framePtr; int global; Jim_Var *var = Jim_Alloc(sizeof(*var)); var->objPtr = valObjPtr; Jim_IncrRefCount(valObjPtr); var->linkFramePtr = NULL; name = Jim_String(nameObjPtr); if (name[0] == ':' && name[1] == ':') { while (*++name == ':') { } framePtr = interp->topFramePtr; global = 1; } else { framePtr = interp->framePtr; global = 0; } Jim_AddHashEntry(&framePtr->vars, name, var); Jim_FreeIntRep(interp, nameObjPtr); nameObjPtr->typePtr = &variableObjType; nameObjPtr->internalRep.varValue.callFrameId = framePtr->id; nameObjPtr->internalRep.varValue.varPtr = var; nameObjPtr->internalRep.varValue.global = global; return var; |
︙ | ︙ | |||
9628 9629 9630 9631 9632 9633 9634 | case JIM_OK: var = nameObjPtr->internalRep.varValue.varPtr; if (var->linkFramePtr == NULL) { Jim_IncrRefCount(valObjPtr); Jim_DecrRefCount(interp, var->objPtr); var->objPtr = valObjPtr; } | | | 9828 9829 9830 9831 9832 9833 9834 9835 9836 9837 9838 9839 9840 9841 9842 | case JIM_OK: var = nameObjPtr->internalRep.varValue.varPtr; if (var->linkFramePtr == NULL) { Jim_IncrRefCount(valObjPtr); Jim_DecrRefCount(interp, var->objPtr); var->objPtr = valObjPtr; } else { Jim_CallFrame *savedCallFrame; savedCallFrame = interp->framePtr; interp->framePtr = var->linkFramePtr; err = Jim_SetVariable(interp, var->objPtr, valObjPtr); interp->framePtr = savedCallFrame; if (err != JIM_OK) |
︙ | ︙ | |||
9689 9690 9691 9692 9693 9694 9695 | Jim_Obj *targetNameObjPtr, Jim_CallFrame *targetCallFrame) { const char *varName; const char *targetName; Jim_CallFrame *framePtr; Jim_Var *varPtr; | | | | | | | | 9889 9890 9891 9892 9893 9894 9895 9896 9897 9898 9899 9900 9901 9902 9903 9904 9905 9906 9907 9908 9909 9910 9911 9912 9913 9914 9915 9916 9917 9918 9919 9920 9921 9922 9923 9924 9925 9926 9927 9928 9929 9930 | Jim_Obj *targetNameObjPtr, Jim_CallFrame *targetCallFrame) { const char *varName; const char *targetName; Jim_CallFrame *framePtr; Jim_Var *varPtr; switch (SetVariableFromAny(interp, nameObjPtr)) { case JIM_DICT_SUGAR: Jim_SetResultFormatted(interp, "bad variable name \"%#s\": upvar won't create a scalar variable that looks like an array element", nameObjPtr); return JIM_ERR; case JIM_OK: varPtr = nameObjPtr->internalRep.varValue.varPtr; if (varPtr->linkFramePtr == NULL) { Jim_SetResultFormatted(interp, "variable \"%#s\" already exists", nameObjPtr); return JIM_ERR; } varPtr->linkFramePtr = NULL; break; } varName = Jim_String(nameObjPtr); if (varName[0] == ':' && varName[1] == ':') { while (*++varName == ':') { } framePtr = interp->topFramePtr; } else { framePtr = interp->framePtr; } targetName = Jim_String(targetNameObjPtr); |
︙ | ︙ | |||
9740 9741 9742 9743 9744 9745 9746 | Jim_SetResultFormatted(interp, "bad variable name \"%#s\": upvar won't create namespace variable that refers to procedure variable", nameObjPtr); Jim_DecrRefCount(interp, targetNameObjPtr); return JIM_ERR; } | | | | | | | | | 9940 9941 9942 9943 9944 9945 9946 9947 9948 9949 9950 9951 9952 9953 9954 9955 9956 9957 9958 9959 9960 9961 9962 9963 9964 9965 9966 9967 9968 9969 9970 9971 9972 9973 9974 9975 9976 9977 9978 9979 9980 9981 9982 9983 9984 9985 9986 9987 9988 9989 9990 9991 9992 9993 9994 9995 9996 9997 9998 9999 10000 10001 10002 10003 10004 10005 10006 10007 10008 10009 | Jim_SetResultFormatted(interp, "bad variable name \"%#s\": upvar won't create namespace variable that refers to procedure variable", nameObjPtr); Jim_DecrRefCount(interp, targetNameObjPtr); return JIM_ERR; } if (framePtr == targetCallFrame) { Jim_Obj *objPtr = targetNameObjPtr; while (1) { if (strcmp(Jim_String(objPtr), varName) == 0) { Jim_SetResultString(interp, "can't upvar from variable to itself", -1); Jim_DecrRefCount(interp, targetNameObjPtr); return JIM_ERR; } if (SetVariableFromAny(interp, objPtr) != JIM_OK) break; varPtr = objPtr->internalRep.varValue.varPtr; if (varPtr->linkFramePtr != targetCallFrame) break; objPtr = varPtr->objPtr; } } Jim_SetVariable(interp, nameObjPtr, targetNameObjPtr); nameObjPtr->internalRep.varValue.varPtr->linkFramePtr = targetCallFrame; Jim_DecrRefCount(interp, targetNameObjPtr); return JIM_OK; } Jim_Obj *Jim_GetVariable(Jim_Interp *interp, Jim_Obj *nameObjPtr, int flags) { switch (SetVariableFromAny(interp, nameObjPtr)) { case JIM_OK:{ Jim_Var *varPtr = nameObjPtr->internalRep.varValue.varPtr; if (varPtr->linkFramePtr == NULL) { return varPtr->objPtr; } else { Jim_Obj *objPtr; Jim_CallFrame *savedCallFrame = interp->framePtr; interp->framePtr = varPtr->linkFramePtr; objPtr = Jim_GetVariable(interp, varPtr->objPtr, flags); interp->framePtr = savedCallFrame; if (objPtr) { return objPtr; } } } break; case JIM_DICT_SUGAR: return JimDictSugarGet(interp, nameObjPtr, flags); } if (flags & JIM_ERRMSG) { Jim_SetResultFormatted(interp, "can't read \"%#s\": no such variable", nameObjPtr); } return NULL; } |
︙ | ︙ | |||
9849 9850 9851 9852 9853 9854 9855 | { Jim_Var *varPtr; int retval; Jim_CallFrame *framePtr; retval = SetVariableFromAny(interp, nameObjPtr); if (retval == JIM_DICT_SUGAR) { | | | | | 10049 10050 10051 10052 10053 10054 10055 10056 10057 10058 10059 10060 10061 10062 10063 10064 10065 10066 10067 10068 10069 10070 10071 10072 10073 10074 10075 10076 10077 10078 10079 10080 10081 10082 10083 10084 10085 10086 10087 10088 | { Jim_Var *varPtr; int retval; Jim_CallFrame *framePtr; retval = SetVariableFromAny(interp, nameObjPtr); if (retval == JIM_DICT_SUGAR) { return JimDictSugarSet(interp, nameObjPtr, NULL); } else if (retval == JIM_OK) { varPtr = nameObjPtr->internalRep.varValue.varPtr; if (varPtr->linkFramePtr) { framePtr = interp->framePtr; interp->framePtr = varPtr->linkFramePtr; retval = Jim_UnsetVariable(interp, varPtr->objPtr, JIM_NONE); interp->framePtr = framePtr; } else { const char *name = Jim_String(nameObjPtr); if (nameObjPtr->internalRep.varValue.global) { name += 2; framePtr = interp->topFramePtr; } else { framePtr = interp->framePtr; } retval = Jim_DeleteHashEntry(&framePtr->vars, name); if (retval == JIM_OK) { framePtr->id = interp->callFrameEpoch++; } } } if (retval != JIM_OK && (flags & JIM_ERRMSG)) { Jim_SetResultFormatted(interp, "can't unset \"%#s\": no such variable", nameObjPtr); } |
︙ | ︙ | |||
9907 9908 9909 9910 9911 9912 9913 | p++; keyLen = (str + len) - p; if (str[len - 1] == ')') { keyLen--; } | | | | | | 10107 10108 10109 10110 10111 10112 10113 10114 10115 10116 10117 10118 10119 10120 10121 10122 10123 10124 10125 10126 10127 10128 10129 10130 10131 10132 10133 10134 10135 10136 10137 10138 10139 10140 10141 10142 10143 10144 10145 10146 10147 10148 10149 10150 10151 10152 | p++; keyLen = (str + len) - p; if (str[len - 1] == ')') { keyLen--; } keyObjPtr = Jim_NewStringObj(interp, p, keyLen); Jim_IncrRefCount(varObjPtr); Jim_IncrRefCount(keyObjPtr); *varPtrPtr = varObjPtr; *keyPtrPtr = keyObjPtr; } static int JimDictSugarSet(Jim_Interp *interp, Jim_Obj *objPtr, Jim_Obj *valObjPtr) { int err; SetDictSubstFromAny(interp, objPtr); err = Jim_SetDictKeysVector(interp, objPtr->internalRep.dictSubstValue.varNameObjPtr, &objPtr->internalRep.dictSubstValue.indexObjPtr, 1, valObjPtr, JIM_MUSTEXIST); if (err == JIM_OK) { Jim_SetEmptyResult(interp); } else { if (!valObjPtr) { if (Jim_GetVariable(interp, objPtr->internalRep.dictSubstValue.varNameObjPtr, JIM_NONE)) { Jim_SetResultFormatted(interp, "can't unset \"%#s\": no such element in array", objPtr); return err; } } Jim_SetResultFormatted(interp, "can't %s \"%#s\": variable isn't array", (valObjPtr ? "set" : "unset"), objPtr); } return err; } static Jim_Obj *JimDictExpandArrayVariable(Jim_Interp *interp, Jim_Obj *varObjPtr, |
︙ | ︙ | |||
9964 9965 9966 9967 9968 9969 9970 | ret = Jim_DictKey(interp, dictObjPtr, keyObjPtr, &resObjPtr, JIM_NONE); if (ret != JIM_OK) { Jim_SetResultFormatted(interp, "can't read \"%#s(%#s)\": %s array", varObjPtr, keyObjPtr, ret < 0 ? "variable isn't" : "no such element in"); } else if ((flags & JIM_UNSHARED) && Jim_IsShared(dictObjPtr)) { | | | 10164 10165 10166 10167 10168 10169 10170 10171 10172 10173 10174 10175 10176 10177 10178 | ret = Jim_DictKey(interp, dictObjPtr, keyObjPtr, &resObjPtr, JIM_NONE); if (ret != JIM_OK) { Jim_SetResultFormatted(interp, "can't read \"%#s(%#s)\": %s array", varObjPtr, keyObjPtr, ret < 0 ? "variable isn't" : "no such element in"); } else if ((flags & JIM_UNSHARED) && Jim_IsShared(dictObjPtr)) { Jim_SetVariable(interp, varObjPtr, Jim_DuplicateObj(interp, dictObjPtr)); } return resObjPtr; } |
︙ | ︙ | |||
10006 10007 10008 10009 10010 10011 10012 | static void SetDictSubstFromAny(Jim_Interp *interp, Jim_Obj *objPtr) { if (objPtr->typePtr != &dictSubstObjType) { Jim_Obj *varObjPtr, *keyObjPtr; if (objPtr->typePtr == &interpolatedObjType) { | | | 10206 10207 10208 10209 10210 10211 10212 10213 10214 10215 10216 10217 10218 10219 10220 | static void SetDictSubstFromAny(Jim_Interp *interp, Jim_Obj *objPtr) { if (objPtr->typePtr != &dictSubstObjType) { Jim_Obj *varObjPtr, *keyObjPtr; if (objPtr->typePtr == &interpolatedObjType) { varObjPtr = objPtr->internalRep.dictSubstValue.varNameObjPtr; keyObjPtr = objPtr->internalRep.dictSubstValue.indexObjPtr; Jim_IncrRefCount(varObjPtr); Jim_IncrRefCount(keyObjPtr); } |
︙ | ︙ | |||
10051 10052 10053 10054 10055 10056 10057 | } static Jim_Obj *JimExpandExprSugar(Jim_Interp *interp, Jim_Obj *objPtr) { Jim_Obj *resultObjPtr; if (Jim_EvalExpression(interp, objPtr, &resultObjPtr) == JIM_OK) { | | | 10251 10252 10253 10254 10255 10256 10257 10258 10259 10260 10261 10262 10263 10264 10265 | } static Jim_Obj *JimExpandExprSugar(Jim_Interp *interp, Jim_Obj *objPtr) { Jim_Obj *resultObjPtr; if (Jim_EvalExpression(interp, objPtr, &resultObjPtr) == JIM_OK) { resultObjPtr->refCount--; return resultObjPtr; } return NULL; } |
︙ | ︙ | |||
10074 10075 10076 10077 10078 10079 10080 | cf->argv = NULL; cf->argc = 0; cf->procArgsObjPtr = NULL; cf->procBodyObjPtr = NULL; cf->next = NULL; cf->staticVars = NULL; cf->localCommands = NULL; | < | 10274 10275 10276 10277 10278 10279 10280 10281 10282 10283 10284 10285 10286 10287 | cf->argv = NULL; cf->argc = 0; cf->procArgsObjPtr = NULL; cf->procBodyObjPtr = NULL; cf->next = NULL; cf->staticVars = NULL; cf->localCommands = NULL; cf->tailcallObj = NULL; cf->tailcallCmd = NULL; } else { cf = Jim_Alloc(sizeof(*cf)); memset(cf, 0, sizeof(*cf)); |
︙ | ︙ | |||
10096 10097 10098 10099 10100 10101 10102 | Jim_IncrRefCount(nsObj); return cf; } static int JimDeleteLocalProcs(Jim_Interp *interp, Jim_Stack *localCommands) { | | | | < > | | | 10295 10296 10297 10298 10299 10300 10301 10302 10303 10304 10305 10306 10307 10308 10309 10310 10311 10312 10313 10314 10315 10316 10317 10318 10319 10320 10321 10322 10323 10324 10325 10326 10327 10328 10329 10330 10331 10332 10333 10334 10335 10336 10337 10338 10339 10340 10341 10342 10343 10344 10345 10346 10347 10348 10349 10350 | Jim_IncrRefCount(nsObj); return cf; } static int JimDeleteLocalProcs(Jim_Interp *interp, Jim_Stack *localCommands) { if (localCommands) { Jim_Obj *cmdNameObj; while ((cmdNameObj = Jim_StackPop(localCommands)) != NULL) { Jim_HashEntry *he; Jim_Obj *fqObjName; Jim_HashTable *ht = &interp->commands; const char *fqname = JimQualifyName(interp, Jim_String(cmdNameObj), &fqObjName); he = Jim_FindHashEntry(ht, fqname); if (he) { Jim_Cmd *cmd = Jim_GetHashEntryVal(he); if (cmd->prevCmd) { Jim_Cmd *prevCmd = cmd->prevCmd; cmd->prevCmd = NULL; JimDecrCmdRefCount(interp, cmd); Jim_SetHashVal(ht, he, prevCmd); } else { Jim_DeleteHashEntry(ht, fqname); } Jim_InterpIncrProcEpoch(interp); } Jim_DecrRefCount(interp, cmdNameObj); JimFreeQualifiedName(interp, fqObjName); } Jim_FreeStack(localCommands); Jim_Free(localCommands); } return JIM_OK; } #define JIM_FCF_FULL 0 #define JIM_FCF_REUSE 1 static void JimFreeCallFrame(Jim_Interp *interp, Jim_CallFrame *cf, int action) { JimDeleteLocalProcs(interp, cf->localCommands); if (cf->procArgsObjPtr) Jim_DecrRefCount(interp, cf->procArgsObjPtr); if (cf->procBodyObjPtr) |
︙ | ︙ | |||
10174 10175 10176 10177 10178 10179 10180 | cf->vars.used = 0; } cf->next = interp->freeFramesList; interp->freeFramesList = cf; } | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 10373 10374 10375 10376 10377 10378 10379 10380 10381 10382 10383 10384 10385 10386 | cf->vars.used = 0; } cf->next = interp->freeFramesList; interp->freeFramesList = cf; } int Jim_IsBigEndian(void) { union { unsigned short s; unsigned char c[2]; } uval = {0x0102}; |
︙ | ︙ | |||
10477 10478 10479 10480 10481 10482 10483 | Jim_IncrRefCount(i->unknown); Jim_IncrRefCount(i->currentScriptObj); Jim_IncrRefCount(i->nullScriptObj); Jim_IncrRefCount(i->errorProc); Jim_IncrRefCount(i->trueObj); Jim_IncrRefCount(i->falseObj); | | > | | 10423 10424 10425 10426 10427 10428 10429 10430 10431 10432 10433 10434 10435 10436 10437 10438 10439 10440 10441 10442 10443 10444 10445 10446 10447 10448 10449 10450 10451 10452 10453 10454 10455 10456 10457 10458 10459 | Jim_IncrRefCount(i->unknown); Jim_IncrRefCount(i->currentScriptObj); Jim_IncrRefCount(i->nullScriptObj); Jim_IncrRefCount(i->errorProc); Jim_IncrRefCount(i->trueObj); Jim_IncrRefCount(i->falseObj); Jim_SetVariableStrWithStr(i, JIM_LIBPATH, TCL_LIBRARY); Jim_SetVariableStrWithStr(i, JIM_INTERACTIVE, "0"); Jim_SetVariableStrWithStr(i, "tcl_platform(engine)", "Jim"); Jim_SetVariableStrWithStr(i, "tcl_platform(os)", TCL_PLATFORM_OS); Jim_SetVariableStrWithStr(i, "tcl_platform(platform)", TCL_PLATFORM_PLATFORM); Jim_SetVariableStrWithStr(i, "tcl_platform(pathSeparator)", TCL_PLATFORM_PATH_SEPARATOR); Jim_SetVariableStrWithStr(i, "tcl_platform(byteOrder)", Jim_IsBigEndian() ? "bigEndian" : "littleEndian"); Jim_SetVariableStrWithStr(i, "tcl_platform(threaded)", "0"); Jim_SetVariableStr(i, "tcl_platform(pointerSize)", Jim_NewIntObj(i, sizeof(void *))); Jim_SetVariableStr(i, "tcl_platform(wordSize)", Jim_NewIntObj(i, sizeof(jim_wide))); return i; } void Jim_FreeInterp(Jim_Interp *i) { Jim_CallFrame *cf, *cfx; Jim_Obj *objPtr, *nextObjPtr; for (cf = i->framePtr; cf; cf = cfx) { cfx = cf->parent; JimFreeCallFrame(i, cf, JIM_FCF_FULL); } Jim_DecrRefCount(i, i->emptyObj); Jim_DecrRefCount(i, i->trueObj); |
︙ | ︙ | |||
10551 10552 10553 10554 10555 10556 10557 | objPtr = objPtr->nextObjPtr; } printf("-------------------------------------\n\n"); JimPanic((1, "Live list non empty freeing the interpreter! Leak?")); } #endif | | | | | 10498 10499 10500 10501 10502 10503 10504 10505 10506 10507 10508 10509 10510 10511 10512 10513 10514 10515 10516 10517 10518 10519 10520 10521 10522 10523 10524 10525 10526 10527 10528 | objPtr = objPtr->nextObjPtr; } printf("-------------------------------------\n\n"); JimPanic((1, "Live list non empty freeing the interpreter! Leak?")); } #endif objPtr = i->freeList; while (objPtr) { nextObjPtr = objPtr->nextObjPtr; Jim_Free(objPtr); objPtr = nextObjPtr; } for (cf = i->freeFramesList; cf; cf = cfx) { cfx = cf->next; if (cf->vars.table) Jim_FreeHashTable(&cf->vars); Jim_Free(cf); } Jim_Free(i); } Jim_CallFrame *Jim_GetCallFrameByLevel(Jim_Interp *interp, Jim_Obj *levelObjPtr) { long level; const char *str; |
︙ | ︙ | |||
10592 10593 10594 10595 10596 10597 10598 | } } else { if (Jim_GetLong(interp, levelObjPtr, &level) != JIM_OK || level < 0) { level = -1; } else { | | | | | | | 10539 10540 10541 10542 10543 10544 10545 10546 10547 10548 10549 10550 10551 10552 10553 10554 10555 10556 10557 10558 10559 10560 10561 10562 10563 10564 10565 10566 10567 10568 10569 10570 10571 10572 10573 10574 10575 10576 10577 10578 10579 10580 10581 10582 10583 10584 10585 10586 10587 10588 10589 10590 10591 10592 10593 10594 | } } else { if (Jim_GetLong(interp, levelObjPtr, &level) != JIM_OK || level < 0) { level = -1; } else { level = interp->framePtr->level - level; } } } else { str = "1"; level = interp->framePtr->level - 1; } if (level == 0) { return interp->topFramePtr; } if (level > 0) { for (framePtr = interp->framePtr; framePtr; framePtr = framePtr->parent) { if (framePtr->level == level) { return framePtr; } } } Jim_SetResultFormatted(interp, "bad level \"%s\"", str); return NULL; } static Jim_CallFrame *JimGetCallFrameByInteger(Jim_Interp *interp, Jim_Obj *levelObjPtr) { long level; Jim_CallFrame *framePtr; if (Jim_GetLong(interp, levelObjPtr, &level) == JIM_OK) { if (level <= 0) { level = interp->framePtr->level + level; } if (level == 0) { return interp->topFramePtr; } for (framePtr = interp->framePtr; framePtr; framePtr = framePtr->parent) { if (framePtr->level == level) { return framePtr; } } } |
︙ | ︙ | |||
10656 10657 10658 10659 10660 10661 10662 | Jim_IncrRefCount(interp->stackTrace); } static void JimSetStackTrace(Jim_Interp *interp, Jim_Obj *stackTraceObj) { int len; | | | | | | | | 10603 10604 10605 10606 10607 10608 10609 10610 10611 10612 10613 10614 10615 10616 10617 10618 10619 10620 10621 10622 10623 10624 10625 10626 10627 10628 10629 10630 10631 10632 10633 10634 10635 10636 10637 10638 10639 10640 10641 10642 10643 10644 10645 10646 10647 10648 10649 10650 10651 10652 10653 10654 10655 10656 10657 10658 10659 | Jim_IncrRefCount(interp->stackTrace); } static void JimSetStackTrace(Jim_Interp *interp, Jim_Obj *stackTraceObj) { int len; Jim_IncrRefCount(stackTraceObj); Jim_DecrRefCount(interp, interp->stackTrace); interp->stackTrace = stackTraceObj; interp->errorFlag = 1; len = Jim_ListLength(interp, interp->stackTrace); if (len >= 3) { if (Jim_Length(Jim_ListGetIndex(interp, interp->stackTrace, len - 2)) == 0) { interp->addStackTrace = 1; } } } static void JimAppendStackTrace(Jim_Interp *interp, const char *procname, Jim_Obj *fileNameObj, int linenr) { if (strcmp(procname, "unknown") == 0) { procname = ""; } if (!*procname && !Jim_Length(fileNameObj)) { return; } if (Jim_IsShared(interp->stackTrace)) { Jim_DecrRefCount(interp, interp->stackTrace); interp->stackTrace = Jim_DuplicateObj(interp, interp->stackTrace); Jim_IncrRefCount(interp->stackTrace); } if (!*procname && Jim_Length(fileNameObj)) { int len = Jim_ListLength(interp, interp->stackTrace); if (len >= 3) { Jim_Obj *objPtr = Jim_ListGetIndex(interp, interp->stackTrace, len - 3); if (Jim_Length(objPtr)) { objPtr = Jim_ListGetIndex(interp, interp->stackTrace, len - 2); if (Jim_Length(objPtr) == 0) { ListSetIndex(interp, interp->stackTrace, len - 2, fileNameObj, 0); ListSetIndex(interp, interp->stackTrace, len - 1, Jim_NewIntObj(interp, linenr), 0); return; } } } } |
︙ | ︙ | |||
10804 10805 10806 10807 10808 10809 10810 | static int SetIntFromAny(Jim_Interp *interp, Jim_Obj *objPtr, int flags) { jim_wide wideValue; const char *str; if (objPtr->typePtr == &coercedDoubleObjType) { | | | | | | 10751 10752 10753 10754 10755 10756 10757 10758 10759 10760 10761 10762 10763 10764 10765 10766 10767 10768 10769 10770 10771 10772 10773 10774 10775 10776 10777 10778 10779 10780 10781 10782 10783 | static int SetIntFromAny(Jim_Interp *interp, Jim_Obj *objPtr, int flags) { jim_wide wideValue; const char *str; if (objPtr->typePtr == &coercedDoubleObjType) { objPtr->typePtr = &intObjType; return JIM_OK; } str = Jim_String(objPtr); if (Jim_StringToWide(str, &wideValue, 0) != JIM_OK) { if (flags & JIM_ERRMSG) { Jim_SetResultFormatted(interp, "expected integer but got \"%#s\"", objPtr); } return JIM_ERR; } if ((wideValue == JIM_WIDE_MIN || wideValue == JIM_WIDE_MAX) && errno == ERANGE) { Jim_SetResultString(interp, "Integer value too big to be represented", -1); return JIM_ERR; } Jim_FreeIntRep(interp, objPtr); objPtr->typePtr = &intObjType; objPtr->internalRep.wideValue = wideValue; return JIM_OK; } #ifdef JIM_OPTIMIZATION |
︙ | ︙ | |||
10921 10922 10923 10924 10925 10926 10927 | return; } { char buf[JIM_DOUBLE_SPACE + 1]; int i; int len = sprintf(buf, "%.12g", value); | | | | 10868 10869 10870 10871 10872 10873 10874 10875 10876 10877 10878 10879 10880 10881 10882 10883 10884 10885 10886 10887 10888 | return; } { char buf[JIM_DOUBLE_SPACE + 1]; int i; int len = sprintf(buf, "%.12g", value); for (i = 0; i < len; i++) { if (buf[i] == '.' || buf[i] == 'e') { #if defined(JIM_SPRINTF_DOUBLE_NEEDS_FIX) char *e = strchr(buf, 'e'); if (e && (e[1] == '-' || e[1] == '+') && e[2] == '0') { e += 2; memmove(e, e + 1, len - (e - buf)); } #endif break; } } |
︙ | ︙ | |||
10953 10954 10955 10956 10957 10958 10959 | double doubleValue; jim_wide wideValue; const char *str; str = Jim_String(objPtr); #ifdef HAVE_LONG_LONG | | | | | | | 10900 10901 10902 10903 10904 10905 10906 10907 10908 10909 10910 10911 10912 10913 10914 10915 10916 10917 10918 10919 10920 10921 10922 10923 10924 10925 10926 10927 10928 10929 10930 10931 10932 10933 10934 10935 10936 10937 10938 10939 10940 10941 | double doubleValue; jim_wide wideValue; const char *str; str = Jim_String(objPtr); #ifdef HAVE_LONG_LONG #define MIN_INT_IN_DOUBLE -(1LL << 53) #define MAX_INT_IN_DOUBLE -(MIN_INT_IN_DOUBLE + 1) if (objPtr->typePtr == &intObjType && JimWideValue(objPtr) >= MIN_INT_IN_DOUBLE && JimWideValue(objPtr) <= MAX_INT_IN_DOUBLE) { objPtr->typePtr = &coercedDoubleObjType; return JIM_OK; } else #endif if (Jim_StringToWide(str, &wideValue, 10) == JIM_OK) { Jim_FreeIntRep(interp, objPtr); objPtr->typePtr = &coercedDoubleObjType; objPtr->internalRep.wideValue = wideValue; return JIM_OK; } else { if (Jim_StringToDouble(str, &doubleValue) != JIM_OK) { Jim_SetResultFormatted(interp, "expected floating-point number but got \"%#s\"", objPtr); return JIM_ERR; } Jim_FreeIntRep(interp, objPtr); } objPtr->typePtr = &doubleObjType; objPtr->internalRep.doubleValue = doubleValue; return JIM_OK; } |
︙ | ︙ | |||
11016 11017 11018 11019 11020 11021 11022 11023 11024 11025 11026 11027 11028 11029 | objPtr = Jim_NewObj(interp); objPtr->typePtr = &doubleObjType; objPtr->bytes = NULL; objPtr->internalRep.doubleValue = doubleValue; return objPtr; } static void ListInsertElements(Jim_Obj *listPtr, int idx, int elemc, Jim_Obj *const *elemVec); static void ListAppendElement(Jim_Obj *listPtr, Jim_Obj *objPtr); static void FreeListInternalRep(Jim_Interp *interp, Jim_Obj *objPtr); static void DupListInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr); static void UpdateStringOfList(struct Jim_Obj *objPtr); static int SetListFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr); | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 10963 10964 10965 10966 10967 10968 10969 10970 10971 10972 10973 10974 10975 10976 10977 10978 10979 10980 10981 10982 10983 10984 10985 10986 10987 10988 10989 10990 10991 10992 10993 10994 10995 10996 10997 10998 10999 11000 11001 11002 11003 11004 11005 11006 11007 11008 11009 11010 11011 11012 11013 11014 11015 11016 | objPtr = Jim_NewObj(interp); objPtr->typePtr = &doubleObjType; objPtr->bytes = NULL; objPtr->internalRep.doubleValue = doubleValue; return objPtr; } static int SetBooleanFromAny(Jim_Interp *interp, Jim_Obj *objPtr, int flags); int Jim_GetBoolean(Jim_Interp *interp, Jim_Obj *objPtr, int * booleanPtr) { if (objPtr->typePtr != &intObjType && SetBooleanFromAny(interp, objPtr, JIM_ERRMSG) == JIM_ERR) return JIM_ERR; *booleanPtr = (int) JimWideValue(objPtr); return JIM_OK; } static int SetBooleanFromAny(Jim_Interp *interp, Jim_Obj *objPtr, int flags) { static const char * const falses[] = { "0", "false", "no", "off", NULL }; static const char * const trues[] = { "1", "true", "yes", "on", NULL }; int boolean; int index; if (Jim_GetEnum(interp, objPtr, falses, &index, NULL, 0) == JIM_OK) { boolean = 0; } else if (Jim_GetEnum(interp, objPtr, trues, &index, NULL, 0) == JIM_OK) { boolean = 1; } else { if (flags & JIM_ERRMSG) { Jim_SetResultFormatted(interp, "expected boolean but got \"%#s\"", objPtr); } return JIM_ERR; } Jim_FreeIntRep(interp, objPtr); objPtr->typePtr = &intObjType; objPtr->internalRep.wideValue = boolean; return JIM_OK; } static void ListInsertElements(Jim_Obj *listPtr, int idx, int elemc, Jim_Obj *const *elemVec); static void ListAppendElement(Jim_Obj *listPtr, Jim_Obj *objPtr); static void FreeListInternalRep(Jim_Interp *interp, Jim_Obj *objPtr); static void DupListInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr); static void UpdateStringOfList(struct Jim_Obj *objPtr); static int SetListFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr); |
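[Note, not part of the check-in] The hunk above brings in the new Jim_GetBoolean() entry point and its SetBooleanFromAny() helper, which map the usual Tcl boolean words (0/false/no/off and 1/true/yes/on) onto the integers 0 and 1. A minimal sketch of calling it from C, assuming a build against a full jimtcl tree where jim.h, Jim_CreateInterp() and the other standard interpreter calls are available; the example itself is illustrative only:

    #include <stdio.h>
    #include <jim.h>

    int main(void)
    {
        Jim_Interp *interp = Jim_CreateInterp();
        Jim_Obj *objPtr = Jim_NewStringObj(interp, "on", -1);
        int b;

        Jim_IncrRefCount(objPtr);
        /* "on" is in the trues table of SetBooleanFromAny(), so b becomes 1;
         * anything outside the two tables makes Jim_GetBoolean() return JIM_ERR. */
        if (Jim_GetBoolean(interp, objPtr, &b) == JIM_OK) {
            printf("boolean value: %d\n", b);
        }
        Jim_DecrRefCount(interp, objPtr);
        Jim_FreeInterp(interp);
        return 0;
    }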
︙ | ︙ | |||
11067 11068 11069 11070 11071 11072 11073 | #define JIM_ELESTR_SIMPLE 0 #define JIM_ELESTR_BRACE 1 #define JIM_ELESTR_QUOTE 2 static unsigned char ListElementQuotingType(const char *s, int len) { int i, level, blevel, trySimple = 1; | | | 11054 11055 11056 11057 11058 11059 11060 11061 11062 11063 11064 11065 11066 11067 11068 | #define JIM_ELESTR_SIMPLE 0 #define JIM_ELESTR_BRACE 1 #define JIM_ELESTR_QUOTE 2 static unsigned char ListElementQuotingType(const char *s, int len) { int i, level, blevel, trySimple = 1; if (len == 0) return JIM_ELESTR_BRACE; if (s[0] == '"' || s[0] == '{') { trySimple = 0; goto testbrace; } for (i = 0; i < len; i++) { |
︙ | ︙ | |||
11089 11090 11091 11092 11093 11094 11095 11096 11097 11098 11099 11100 11101 11102 11103 | case '\\': case '\r': case '\n': case '\t': case '\f': case '\v': trySimple = 0; case '{': case '}': goto testbrace; } } return JIM_ELESTR_SIMPLE; testbrace: | > | | 11076 11077 11078 11079 11080 11081 11082 11083 11084 11085 11086 11087 11088 11089 11090 11091 11092 11093 11094 11095 11096 11097 11098 11099 | case '\\': case '\r': case '\n': case '\t': case '\f': case '\v': trySimple = 0; case '{': case '}': goto testbrace; } } return JIM_ELESTR_SIMPLE; testbrace: if (s[len - 1] == '\\') return JIM_ELESTR_QUOTE; level = 0; blevel = 0; for (i = 0; i < len; i++) { switch (s[i]) { case '{': |
︙ | ︙ | |||
11217 11218 11219 11220 11221 11222 11223 | { #define STATIC_QUOTING_LEN 32 int i, bufLen, realLength; const char *strRep; char *p; unsigned char *quotingType, staticQuoting[STATIC_QUOTING_LEN]; | | | | | | | 11205 11206 11207 11208 11209 11210 11211 11212 11213 11214 11215 11216 11217 11218 11219 11220 11221 11222 11223 11224 11225 11226 11227 11228 11229 11230 11231 11232 11233 11234 11235 11236 11237 11238 11239 11240 11241 11242 11243 11244 11245 11246 11247 11248 11249 11250 11251 11252 | { #define STATIC_QUOTING_LEN 32 int i, bufLen, realLength; const char *strRep; char *p; unsigned char *quotingType, staticQuoting[STATIC_QUOTING_LEN]; if (objc > STATIC_QUOTING_LEN) { quotingType = Jim_Alloc(objc); } else { quotingType = staticQuoting; } bufLen = 0; for (i = 0; i < objc; i++) { int len; strRep = Jim_GetString(objv[i], &len); quotingType[i] = ListElementQuotingType(strRep, len); switch (quotingType[i]) { case JIM_ELESTR_SIMPLE: if (i != 0 || strRep[0] != '#') { bufLen += len; break; } quotingType[i] = JIM_ELESTR_BRACE; case JIM_ELESTR_BRACE: bufLen += len + 2; break; case JIM_ELESTR_QUOTE: bufLen += len * 2; break; } bufLen++; } bufLen++; p = objPtr->bytes = Jim_Alloc(bufLen + 1); realLength = 0; for (i = 0; i < objc; i++) { int len, qlen; strRep = Jim_GetString(objv[i], &len); |
︙ | ︙ | |||
11281 11282 11283 11284 11285 11286 11287 | realLength++; } qlen = BackslashQuoteString(strRep, len, p); p += qlen; realLength += qlen; break; } | | | | 11269 11270 11271 11272 11273 11274 11275 11276 11277 11278 11279 11280 11281 11282 11283 11284 11285 11286 11287 11288 11289 | realLength++; } qlen = BackslashQuoteString(strRep, len, p); p += qlen; realLength += qlen; break; } if (i + 1 != objc) { *p++ = ' '; realLength++; } } *p = '\0'; objPtr->length = realLength; if (quotingType != staticQuoting) { Jim_Free(quotingType); } } |
︙ | ︙ | |||
11322 11323 11324 11325 11326 11327 11328 | int i; listObjPtrPtr = JimDictPairs(objPtr, &len); for (i = 0; i < len; i++) { Jim_IncrRefCount(listObjPtrPtr[i]); } | | | | | | 11310 11311 11312 11313 11314 11315 11316 11317 11318 11319 11320 11321 11322 11323 11324 11325 11326 11327 11328 11329 11330 11331 11332 11333 11334 11335 11336 11337 11338 11339 11340 11341 11342 11343 11344 11345 11346 11347 11348 11349 11350 11351 11352 11353 11354 | int i; listObjPtrPtr = JimDictPairs(objPtr, &len); for (i = 0; i < len; i++) { Jim_IncrRefCount(listObjPtrPtr[i]); } Jim_FreeIntRep(interp, objPtr); objPtr->typePtr = &listObjType; objPtr->internalRep.listValue.len = len; objPtr->internalRep.listValue.maxLen = len; objPtr->internalRep.listValue.ele = listObjPtrPtr; return JIM_OK; } if (objPtr->typePtr == &sourceObjType) { fileNameObj = objPtr->internalRep.sourceValue.fileNameObj; linenr = objPtr->internalRep.sourceValue.lineNumber; } else { fileNameObj = interp->emptyObj; linenr = 1; } Jim_IncrRefCount(fileNameObj); str = Jim_GetString(objPtr, &strLen); Jim_FreeIntRep(interp, objPtr); objPtr->typePtr = &listObjType; objPtr->internalRep.listValue.len = 0; objPtr->internalRep.listValue.maxLen = 0; objPtr->internalRep.listValue.ele = NULL; if (strLen) { JimParserInit(&parser, str, strLen, linenr); while (!parser.eof) { Jim_Obj *elementPtr; JimParseList(&parser); if (parser.tt != JIM_TT_STR && parser.tt != JIM_TT_ESC) |
︙ | ︙ | |||
11486 11487 11488 11489 11490 11491 11492 | static int ListSortCommand(Jim_Obj **lhsObj, Jim_Obj **rhsObj) { Jim_Obj *compare_script; int rc; jim_wide ret = 0; | | | 11474 11475 11476 11477 11478 11479 11480 11481 11482 11483 11484 11485 11486 11487 11488 | static int ListSortCommand(Jim_Obj **lhsObj, Jim_Obj **rhsObj) { Jim_Obj *compare_script; int rc; jim_wide ret = 0; compare_script = Jim_DuplicateObj(sort_info->interp, sort_info->command); Jim_ListAppendElement(sort_info->interp, compare_script, *lhsObj); Jim_ListAppendElement(sort_info->interp, compare_script, *rhsObj); rc = Jim_EvalObj(sort_info->interp, compare_script); if (rc != JIM_OK || Jim_GetWide(sort_info->interp, Jim_GetResult(sort_info->interp), &ret) != JIM_OK) { |
︙ | ︙ | |||
11508 11509 11510 11511 11512 11513 11514 | { int src; int dst = 0; Jim_Obj **ele = listObjPtr->internalRep.listValue.ele; for (src = 1; src < listObjPtr->internalRep.listValue.len; src++) { if (comp(&ele[dst], &ele[src]) == 0) { | | | | | | | 11496 11497 11498 11499 11500 11501 11502 11503 11504 11505 11506 11507 11508 11509 11510 11511 11512 11513 11514 11515 11516 11517 11518 11519 11520 11521 11522 11523 11524 11525 11526 11527 11528 11529 11530 11531 11532 11533 11534 11535 11536 11537 11538 11539 11540 | { int src; int dst = 0; Jim_Obj **ele = listObjPtr->internalRep.listValue.ele; for (src = 1; src < listObjPtr->internalRep.listValue.len; src++) { if (comp(&ele[dst], &ele[src]) == 0) { Jim_DecrRefCount(sort_info->interp, ele[dst]); } else { dst++; } ele[dst] = ele[src]; } ele[++dst] = ele[src]; listObjPtr->internalRep.listValue.len = dst; } static int ListSortElements(Jim_Interp *interp, Jim_Obj *listObjPtr, struct lsort_info *info) { struct lsort_info *prev_info; typedef int (qsort_comparator) (const void *, const void *); int (*fn) (Jim_Obj **, Jim_Obj **); Jim_Obj **vector; int len; int rc; JimPanic((Jim_IsShared(listObjPtr), "ListSortElements called with shared object")); SetListFromAny(interp, listObjPtr); prev_info = sort_info; sort_info = info; vector = listObjPtr->internalRep.listValue.ele; len = listObjPtr->internalRep.listValue.len; switch (info->type) { case JIM_LSORT_ASCII: |
︙ | ︙ | |||
11561 11562 11563 11564 11565 11566 11567 | case JIM_LSORT_REAL: fn = ListSortReal; break; case JIM_LSORT_COMMAND: fn = ListSortCommand; break; default: | | > | | 11549 11550 11551 11552 11553 11554 11555 11556 11557 11558 11559 11560 11561 11562 11563 11564 11565 11566 11567 11568 11569 | case JIM_LSORT_REAL: fn = ListSortReal; break; case JIM_LSORT_COMMAND: fn = ListSortCommand; break; default: fn = NULL; JimPanic((1, "ListSort called with invalid sort type")); return -1; } if (info->indexed) { info->subfn = fn; fn = ListSortIndexHelper; } if ((rc = setjmp(info->jmpbuf)) == 0) { qsort(vector, len, sizeof(Jim_Obj *), (qsort_comparator *) fn); |
︙ | ︙ | |||
11594 11595 11596 11597 11598 11599 11600 | int currentLen = listPtr->internalRep.listValue.len; int requiredLen = currentLen + elemc; int i; Jim_Obj **point; if (requiredLen > listPtr->internalRep.listValue.maxLen) { if (requiredLen < 2) { | | | 11583 11584 11585 11586 11587 11588 11589 11590 11591 11592 11593 11594 11595 11596 11597 | int currentLen = listPtr->internalRep.listValue.len; int requiredLen = currentLen + elemc; int i; Jim_Obj **point; if (requiredLen > listPtr->internalRep.listValue.maxLen) { if (requiredLen < 2) { requiredLen = 4; } else { requiredLen *= 2; } listPtr->internalRep.listValue.ele = Jim_Realloc(listPtr->internalRep.listValue.ele, |
︙ | ︙ | |||
11780 11781 11782 11783 11784 11785 11786 | Jim_Obj *objPtr = Jim_NewListObj(interp, NULL, 0); for (i = 0; i < objc; i++) ListAppendList(objPtr, objv[i]); return objPtr; } else { | | | | | | | | 11769 11770 11771 11772 11773 11774 11775 11776 11777 11778 11779 11780 11781 11782 11783 11784 11785 11786 11787 11788 11789 11790 11791 11792 11793 11794 11795 11796 11797 11798 11799 11800 11801 11802 11803 11804 11805 11806 | Jim_Obj *objPtr = Jim_NewListObj(interp, NULL, 0); for (i = 0; i < objc; i++) ListAppendList(objPtr, objv[i]); return objPtr; } else { int len = 0, objLen; char *bytes, *p; for (i = 0; i < objc; i++) { len += Jim_Length(objv[i]); } if (objc) len += objc - 1; p = bytes = Jim_Alloc(len + 1); for (i = 0; i < objc; i++) { const char *s = Jim_GetString(objv[i], &objLen); while (objLen && isspace(UCHAR(*s))) { s++; objLen--; len--; } while (objLen && isspace(UCHAR(s[objLen - 1]))) { if (objLen > 1 && s[objLen - 2] == '\\') { break; } objLen--; len--; } memcpy(p, s, objLen); |
︙ | ︙ | |||
11834 11835 11836 11837 11838 11839 11840 | { int first, last; int len, rangeLen; if (Jim_GetIndex(interp, firstObjPtr, &first) != JIM_OK || Jim_GetIndex(interp, lastObjPtr, &last) != JIM_OK) return NULL; | | | 11823 11824 11825 11826 11827 11828 11829 11830 11831 11832 11833 11834 11835 11836 11837 | { int first, last; int len, rangeLen; if (Jim_GetIndex(interp, firstObjPtr, &first) != JIM_OK || Jim_GetIndex(interp, lastObjPtr, &last) != JIM_OK) return NULL; len = Jim_ListLength(interp, listObjPtr); first = JimRelToAbsIndex(len, first); last = JimRelToAbsIndex(len, last); JimRelToAbsRange(len, &first, &last, &rangeLen); if (first == 0 && last == len) { return listObjPtr; } return Jim_NewListObj(interp, listObjPtr->internalRep.listValue.ele + first, rangeLen); |
︙ | ︙ | |||
11874 11875 11876 11877 11878 11879 11880 | static void JimObjectHTKeyValDestructor(void *interp, void *val) { Jim_DecrRefCount(interp, (Jim_Obj *)val); } static const Jim_HashTableType JimDictHashTableType = { | | | | | | | | 11863 11864 11865 11866 11867 11868 11869 11870 11871 11872 11873 11874 11875 11876 11877 11878 11879 11880 11881 11882 | static void JimObjectHTKeyValDestructor(void *interp, void *val) { Jim_DecrRefCount(interp, (Jim_Obj *)val); } static const Jim_HashTableType JimDictHashTableType = { JimObjectHTHashFunction, JimObjectHTKeyValDup, JimObjectHTKeyValDup, JimObjectHTKeyCompare, JimObjectHTKeyValDestructor, JimObjectHTKeyValDestructor }; static const Jim_ObjType dictObjType = { "dict", FreeDictInternalRep, DupDictInternalRep, UpdateStringOfDict, |
︙ | ︙ | |||
11904 11905 11906 11907 11908 11909 11910 | void DupDictInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr) { Jim_HashTable *ht, *dupHt; Jim_HashTableIterator htiter; Jim_HashEntry *he; | | | | | | | | | 11893 11894 11895 11896 11897 11898 11899 11900 11901 11902 11903 11904 11905 11906 11907 11908 11909 11910 11911 11912 11913 11914 11915 11916 11917 11918 11919 11920 11921 11922 11923 11924 11925 11926 11927 11928 11929 11930 11931 11932 11933 11934 11935 11936 11937 11938 11939 11940 11941 11942 11943 11944 11945 11946 11947 11948 11949 11950 11951 11952 11953 11954 11955 11956 11957 11958 11959 11960 11961 11962 11963 11964 11965 11966 11967 11968 11969 11970 11971 11972 11973 11974 11975 11976 | void DupDictInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr) { Jim_HashTable *ht, *dupHt; Jim_HashTableIterator htiter; Jim_HashEntry *he; ht = srcPtr->internalRep.ptr; dupHt = Jim_Alloc(sizeof(*dupHt)); Jim_InitHashTable(dupHt, &JimDictHashTableType, interp); if (ht->size != 0) Jim_ExpandHashTable(dupHt, ht->size); JimInitHashTableIterator(ht, &htiter); while ((he = Jim_NextHashEntry(&htiter)) != NULL) { Jim_AddHashEntry(dupHt, he->key, he->u.val); } dupPtr->internalRep.ptr = dupHt; dupPtr->typePtr = &dictObjType; } static Jim_Obj **JimDictPairs(Jim_Obj *dictPtr, int *len) { Jim_HashTable *ht; Jim_HashTableIterator htiter; Jim_HashEntry *he; Jim_Obj **objv; int i; ht = dictPtr->internalRep.ptr; objv = Jim_Alloc((ht->used * 2) * sizeof(Jim_Obj *)); JimInitHashTableIterator(ht, &htiter); i = 0; while ((he = Jim_NextHashEntry(&htiter)) != NULL) { objv[i++] = Jim_GetHashEntryKey(he); objv[i++] = Jim_GetHashEntryVal(he); } *len = i; return objv; } static void UpdateStringOfDict(struct Jim_Obj *objPtr) { int len; Jim_Obj **objv = JimDictPairs(objPtr, &len); JimMakeListStringRep(objPtr, objv, len); Jim_Free(objv); } static int SetDictFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr) { int listlen; if (objPtr->typePtr == &dictObjType) { return JIM_OK; } if (Jim_IsList(objPtr) && Jim_IsShared(objPtr)) { Jim_String(objPtr); } listlen = Jim_ListLength(interp, objPtr); if (listlen % 2) { Jim_SetResultString(interp, "missing value to go with key", -1); return JIM_ERR; } else { Jim_HashTable *ht; int i; ht = Jim_Alloc(sizeof(*ht)); Jim_InitHashTable(ht, &JimDictHashTableType, interp); for (i = 0; i < listlen; i += 2) { |
︙ | ︙ | |||
12002 12003 12004 12005 12006 12007 12008 | static int DictAddElement(Jim_Interp *interp, Jim_Obj *objPtr, Jim_Obj *keyObjPtr, Jim_Obj *valueObjPtr) { Jim_HashTable *ht = objPtr->internalRep.ptr; | | | 11991 11992 11993 11994 11995 11996 11997 11998 11999 12000 12001 12002 12003 12004 12005 | static int DictAddElement(Jim_Interp *interp, Jim_Obj *objPtr, Jim_Obj *keyObjPtr, Jim_Obj *valueObjPtr) { Jim_HashTable *ht = objPtr->internalRep.ptr; if (valueObjPtr == NULL) { return Jim_DeleteHashEntry(ht, keyObjPtr); } Jim_ReplaceHashEntry(ht, keyObjPtr, valueObjPtr); return JIM_OK; } int Jim_DictAddElement(Jim_Interp *interp, Jim_Obj *objPtr, |
︙ | ︙ | |||
12102 12103 12104 12105 12106 12107 12108 | { Jim_Obj *varObjPtr, *objPtr, *dictObjPtr; int shared, i; varObjPtr = objPtr = Jim_GetVariable(interp, varNamePtr, flags); if (objPtr == NULL) { if (newObjPtr == NULL && (flags & JIM_MUSTEXIST)) { | | | | | | | 12091 12092 12093 12094 12095 12096 12097 12098 12099 12100 12101 12102 12103 12104 12105 12106 12107 12108 12109 12110 12111 12112 12113 12114 12115 12116 12117 12118 12119 12120 12121 12122 12123 12124 12125 12126 12127 12128 12129 12130 12131 12132 12133 12134 12135 12136 12137 12138 12139 12140 12141 12142 12143 12144 12145 12146 12147 12148 12149 12150 12151 | { Jim_Obj *varObjPtr, *objPtr, *dictObjPtr; int shared, i; varObjPtr = objPtr = Jim_GetVariable(interp, varNamePtr, flags); if (objPtr == NULL) { if (newObjPtr == NULL && (flags & JIM_MUSTEXIST)) { return JIM_ERR; } varObjPtr = objPtr = Jim_NewDictObj(interp, NULL, 0); if (Jim_SetVariable(interp, varNamePtr, objPtr) != JIM_OK) { Jim_FreeNewObj(interp, varObjPtr); return JIM_ERR; } } if ((shared = Jim_IsShared(objPtr))) varObjPtr = objPtr = Jim_DuplicateObj(interp, objPtr); for (i = 0; i < keyc; i++) { dictObjPtr = objPtr; if (SetDictFromAny(interp, dictObjPtr) != JIM_OK) { goto err; } if (i == keyc - 1) { if (Jim_DictAddElement(interp, objPtr, keyv[keyc - 1], newObjPtr) != JIM_OK) { if (newObjPtr || (flags & JIM_MUSTEXIST)) { goto err; } } break; } Jim_InvalidateStringRep(dictObjPtr); if (Jim_DictKey(interp, dictObjPtr, keyv[i], &objPtr, newObjPtr ? JIM_NONE : JIM_ERRMSG) == JIM_OK) { if (Jim_IsShared(objPtr)) { objPtr = Jim_DuplicateObj(interp, objPtr); DictAddElement(interp, dictObjPtr, keyv[i], objPtr); } } else { if (newObjPtr == NULL) { goto err; } objPtr = Jim_NewDictObj(interp, NULL, 0); DictAddElement(interp, dictObjPtr, keyv[i], objPtr); } } Jim_InvalidateStringRep(objPtr); Jim_InvalidateStringRep(varObjPtr); if (Jim_SetVariable(interp, varNamePtr, varObjPtr) != JIM_OK) { goto err; } Jim_SetResult(interp, varObjPtr); return JIM_OK; |
︙ | ︙ | |||
12185 12186 12187 12188 12189 12190 12191 | } else { char buf[JIM_INTEGER_SPACE + 1]; if (objPtr->internalRep.intValue >= 0) { sprintf(buf, "%d", objPtr->internalRep.intValue); } else { | | | | | | | | | | 12174 12175 12176 12177 12178 12179 12180 12181 12182 12183 12184 12185 12186 12187 12188 12189 12190 12191 12192 12193 12194 12195 12196 12197 12198 12199 12200 12201 12202 12203 12204 12205 12206 12207 12208 12209 12210 12211 12212 12213 12214 12215 12216 12217 12218 12219 12220 12221 12222 12223 12224 12225 12226 12227 12228 12229 12230 12231 12232 12233 12234 12235 12236 12237 12238 12239 12240 12241 12242 12243 12244 12245 12246 12247 12248 12249 12250 12251 12252 12253 12254 12255 12256 12257 12258 12259 12260 12261 12262 12263 | } else { char buf[JIM_INTEGER_SPACE + 1]; if (objPtr->internalRep.intValue >= 0) { sprintf(buf, "%d", objPtr->internalRep.intValue); } else { sprintf(buf, "end%d", objPtr->internalRep.intValue + 1); } JimSetStringBytes(objPtr, buf); } } static int SetIndexFromAny(Jim_Interp *interp, Jim_Obj *objPtr) { int idx, end = 0; const char *str; char *endptr; str = Jim_String(objPtr); if (strncmp(str, "end", 3) == 0) { end = 1; str += 3; idx = 0; } else { idx = jim_strtol(str, &endptr); if (endptr == str) { goto badindex; } str = endptr; } if (*str == '+' || *str == '-') { int sign = (*str == '+' ? 1 : -1); idx += sign * jim_strtol(++str, &endptr); if (str == endptr || *endptr) { goto badindex; } str = endptr; } while (isspace(UCHAR(*str))) { str++; } if (*str) { goto badindex; } if (end) { if (idx > 0) { idx = INT_MAX; } else { idx--; } } else if (idx < 0) { idx = -INT_MAX; } Jim_FreeIntRep(interp, objPtr); objPtr->typePtr = &indexObjType; objPtr->internalRep.intValue = idx; return JIM_OK; badindex: Jim_SetResultFormatted(interp, "bad index \"%#s\": must be integer?[+-]integer? or end?[+-]integer?", objPtr); return JIM_ERR; } int Jim_GetIndex(Jim_Interp *interp, Jim_Obj *objPtr, int *indexPtr) { if (objPtr->typePtr == &intObjType) { jim_wide val = JimWideValue(objPtr); if (val < 0) *indexPtr = -INT_MAX; else if (val > INT_MAX) *indexPtr = INT_MAX; |
︙ | ︙ | |||
12317 12318 12319 12320 12321 12322 12323 | } static int SetReturnCodeFromAny(Jim_Interp *interp, Jim_Obj *objPtr) { int returnCode; jim_wide wideValue; | | | > | | | | | | | | | | | | | | | | | | | | > > > | 12306 12307 12308 12309 12310 12311 12312 12313 12314 12315 12316 12317 12318 12319 12320 12321 12322 12323 12324 12325 12326 12327 12328 12329 12330 12331 12332 12333 12334 12335 12336 12337 12338 12339 12340 12341 12342 12343 12344 12345 12346 12347 12348 12349 12350 12351 12352 12353 12354 12355 12356 12357 12358 12359 12360 12361 12362 12363 12364 12365 12366 12367 12368 12369 12370 12371 12372 12373 12374 12375 12376 12377 12378 12379 12380 12381 12382 12383 12384 12385 12386 12387 12388 12389 12390 12391 12392 12393 12394 12395 12396 12397 12398 12399 12400 12401 12402 12403 12404 12405 12406 12407 12408 12409 12410 12411 12412 12413 12414 12415 12416 12417 12418 12419 12420 12421 12422 12423 12424 12425 12426 12427 12428 12429 12430 12431 12432 12433 12434 12435 12436 12437 | } static int SetReturnCodeFromAny(Jim_Interp *interp, Jim_Obj *objPtr) { int returnCode; jim_wide wideValue; if (JimGetWideNoErr(interp, objPtr, &wideValue) != JIM_ERR) returnCode = (int)wideValue; else if (Jim_GetEnum(interp, objPtr, jimReturnCodes, &returnCode, NULL, JIM_NONE) != JIM_OK) { Jim_SetResultFormatted(interp, "expected return code but got \"%#s\"", objPtr); return JIM_ERR; } Jim_FreeIntRep(interp, objPtr); objPtr->typePtr = &returnCodeObjType; objPtr->internalRep.intValue = returnCode; return JIM_OK; } int Jim_GetReturnCode(Jim_Interp *interp, Jim_Obj *objPtr, int *intPtr) { if (objPtr->typePtr != &returnCodeObjType && SetReturnCodeFromAny(interp, objPtr) == JIM_ERR) return JIM_ERR; *intPtr = objPtr->internalRep.intValue; return JIM_OK; } static int JimParseExprOperator(struct JimParserCtx *pc); static int JimParseExprNumber(struct JimParserCtx *pc); static int JimParseExprIrrational(struct JimParserCtx *pc); static int JimParseExprBoolean(struct JimParserCtx *pc); enum { JIM_EXPROP_MUL = JIM_TT_EXPR_OP, JIM_EXPROP_DIV, JIM_EXPROP_MOD, JIM_EXPROP_SUB, JIM_EXPROP_ADD, JIM_EXPROP_LSHIFT, JIM_EXPROP_RSHIFT, JIM_EXPROP_ROTL, JIM_EXPROP_ROTR, JIM_EXPROP_LT, JIM_EXPROP_GT, JIM_EXPROP_LTE, JIM_EXPROP_GTE, JIM_EXPROP_NUMEQ, JIM_EXPROP_NUMNE, JIM_EXPROP_BITAND, JIM_EXPROP_BITXOR, JIM_EXPROP_BITOR, JIM_EXPROP_LOGICAND, JIM_EXPROP_LOGICAND_LEFT, JIM_EXPROP_LOGICAND_RIGHT, JIM_EXPROP_LOGICOR, JIM_EXPROP_LOGICOR_LEFT, JIM_EXPROP_LOGICOR_RIGHT, JIM_EXPROP_TERNARY, JIM_EXPROP_TERNARY_LEFT, JIM_EXPROP_TERNARY_RIGHT, JIM_EXPROP_COLON, JIM_EXPROP_COLON_LEFT, JIM_EXPROP_COLON_RIGHT, JIM_EXPROP_POW, JIM_EXPROP_STREQ, JIM_EXPROP_STRNE, JIM_EXPROP_STRIN, JIM_EXPROP_STRNI, JIM_EXPROP_NOT, JIM_EXPROP_BITNOT, JIM_EXPROP_UNARYMINUS, JIM_EXPROP_UNARYPLUS, JIM_EXPROP_FUNC_FIRST, JIM_EXPROP_FUNC_INT = JIM_EXPROP_FUNC_FIRST, JIM_EXPROP_FUNC_WIDE, JIM_EXPROP_FUNC_ABS, JIM_EXPROP_FUNC_DOUBLE, JIM_EXPROP_FUNC_ROUND, JIM_EXPROP_FUNC_RAND, JIM_EXPROP_FUNC_SRAND, JIM_EXPROP_FUNC_SIN, JIM_EXPROP_FUNC_COS, JIM_EXPROP_FUNC_TAN, JIM_EXPROP_FUNC_ASIN, JIM_EXPROP_FUNC_ACOS, JIM_EXPROP_FUNC_ATAN, JIM_EXPROP_FUNC_ATAN2, JIM_EXPROP_FUNC_SINH, JIM_EXPROP_FUNC_COSH, JIM_EXPROP_FUNC_TANH, JIM_EXPROP_FUNC_CEIL, JIM_EXPROP_FUNC_FLOOR, JIM_EXPROP_FUNC_EXP, JIM_EXPROP_FUNC_LOG, JIM_EXPROP_FUNC_LOG10, JIM_EXPROP_FUNC_SQRT, JIM_EXPROP_FUNC_POW, JIM_EXPROP_FUNC_HYPOT, JIM_EXPROP_FUNC_FMOD, }; struct JimExprState { Jim_Obj **stack; int stacklen; int opcode; |
︙ | ︙ | |||
12511 12512 12513 12514 12515 12516 12517 12518 12519 12520 12521 12522 12523 12524 12525 | break; case JIM_EXPROP_FUNC_DOUBLE: case JIM_EXPROP_UNARYPLUS: dC = dA; intresult = 0; break; case JIM_EXPROP_FUNC_ABS: dC = dA >= 0 ? dA : -dA; intresult = 0; break; case JIM_EXPROP_UNARYMINUS: dC = -dA; intresult = 0; break; case JIM_EXPROP_NOT: | > > > > | 12504 12505 12506 12507 12508 12509 12510 12511 12512 12513 12514 12515 12516 12517 12518 12519 12520 12521 12522 | break; case JIM_EXPROP_FUNC_DOUBLE: case JIM_EXPROP_UNARYPLUS: dC = dA; intresult = 0; break; case JIM_EXPROP_FUNC_ABS: #ifdef JIM_MATH_FUNCTIONS dC = fabs(dA); #else dC = dA >= 0 ? dA : -dA; #endif intresult = 0; break; case JIM_EXPROP_UNARYMINUS: dC = -dA; intresult = 0; break; case JIM_EXPROP_NOT: |
︙ | ︙ | |||
12703 12704 12705 12706 12707 12708 12709 | if (negative) { wC = -wC; } } break; case JIM_EXPROP_ROTL: case JIM_EXPROP_ROTR:{ | | | | 12700 12701 12702 12703 12704 12705 12706 12707 12708 12709 12710 12711 12712 12713 12714 12715 12716 12717 12718 12719 | if (negative) { wC = -wC; } } break; case JIM_EXPROP_ROTL: case JIM_EXPROP_ROTR:{ unsigned long uA = (unsigned long)wA; unsigned long uB = (unsigned long)wB; const unsigned int S = sizeof(unsigned long) * 8; uB %= S; if (e->opcode == JIM_EXPROP_ROTR) { uB = S - uB; } wC = (unsigned long)(uA << uB) | (uA >> (S - uB)); break; |
︙ | ︙ | |||
12734 12735 12736 12737 12738 12739 12740 | return rc; } static int JimExprOpBin(Jim_Interp *interp, struct JimExprState *e) { | < | > > > > > | | | | > > < | | | | | < | < | < > | < > | > > > > > > > > > > > > > > > < | | | | | < | < | < | < | < | < < < | | | | | | | < | < < | < < < < < < < | < > > > > > > | 12731 12732 12733 12734 12735 12736 12737 12738 12739 12740 12741 12742 12743 12744 12745 12746 12747 12748 12749 12750 12751 12752 12753 12754 12755 12756 12757 12758 12759 12760 12761 12762 12763 12764 12765 12766 12767 12768 12769 12770 12771 12772 12773 12774 12775 12776 12777 12778 12779 12780 12781 12782 12783 12784 12785 12786 12787 12788 12789 12790 12791 12792 12793 12794 12795 12796 12797 12798 12799 12800 12801 12802 12803 12804 12805 12806 12807 12808 12809 12810 12811 12812 12813 12814 12815 12816 12817 12818 12819 12820 12821 12822 12823 12824 12825 12826 12827 12828 12829 12830 12831 12832 12833 12834 12835 12836 12837 12838 12839 12840 12841 12842 12843 12844 12845 12846 12847 12848 12849 12850 12851 12852 12853 12854 12855 12856 12857 12858 12859 12860 12861 12862 12863 12864 12865 12866 12867 12868 12869 12870 12871 12872 12873 12874 12875 12876 12877 12878 12879 12880 12881 12882 12883 12884 12885 12886 12887 12888 12889 12890 12891 12892 12893 12894 12895 12896 12897 12898 12899 12900 12901 12902 12903 12904 12905 12906 12907 12908 12909 12910 12911 12912 12913 12914 12915 12916 12917 12918 12919 | return rc; } static int JimExprOpBin(Jim_Interp *interp, struct JimExprState *e) { int rc = JIM_OK; double dA, dB, dC = 0; jim_wide wA, wB, wC = 0; Jim_Obj *B = ExprPop(e); Jim_Obj *A = ExprPop(e); if ((A->typePtr != &doubleObjType || A->bytes) && (B->typePtr != &doubleObjType || B->bytes) && JimGetWideNoErr(interp, A, &wA) == JIM_OK && JimGetWideNoErr(interp, B, &wB) == JIM_OK) { switch (e->opcode) { case JIM_EXPROP_POW: case JIM_EXPROP_FUNC_POW: if (wA == 0 && wB < 0) { Jim_SetResultString(interp, "exponentiation of zero by negative power", -1); rc = JIM_ERR; goto done; } wC = JimPowWide(wA, wB); goto intresult; case JIM_EXPROP_ADD: wC = wA + wB; goto intresult; case JIM_EXPROP_SUB: wC = wA - wB; goto intresult; case JIM_EXPROP_MUL: wC = wA * wB; goto intresult; case JIM_EXPROP_DIV: if (wB == 0) { Jim_SetResultString(interp, "Division by zero", -1); rc = JIM_ERR; goto done; } else { if (wB < 0) { wB = -wB; wA = -wA; } wC = wA / wB; if (wA % wB < 0) { wC--; } goto intresult; } case JIM_EXPROP_LT: wC = wA < wB; goto intresult; case JIM_EXPROP_GT: wC = wA > wB; goto intresult; case JIM_EXPROP_LTE: wC = wA <= wB; goto intresult; case JIM_EXPROP_GTE: wC = wA >= wB; goto intresult; case JIM_EXPROP_NUMEQ: wC = wA == wB; goto intresult; case JIM_EXPROP_NUMNE: wC = wA != wB; goto intresult; } } if (Jim_GetDouble(interp, A, &dA) == JIM_OK && Jim_GetDouble(interp, B, &dB) == JIM_OK) { switch (e->opcode) { #ifndef JIM_MATH_FUNCTIONS case JIM_EXPROP_POW: case JIM_EXPROP_FUNC_POW: case JIM_EXPROP_FUNC_ATAN2: case JIM_EXPROP_FUNC_HYPOT: case JIM_EXPROP_FUNC_FMOD: Jim_SetResultString(interp, "unsupported", -1); rc = JIM_ERR; goto done; #else case JIM_EXPROP_POW: case JIM_EXPROP_FUNC_POW: dC = pow(dA, dB); goto doubleresult; case JIM_EXPROP_FUNC_ATAN2: dC = atan2(dA, dB); goto doubleresult; case JIM_EXPROP_FUNC_HYPOT: dC = hypot(dA, dB); goto doubleresult; case JIM_EXPROP_FUNC_FMOD: dC = fmod(dA, dB); goto doubleresult; #endif case JIM_EXPROP_ADD: dC = dA + dB; goto doubleresult; case JIM_EXPROP_SUB: dC = dA - dB; goto doubleresult; case JIM_EXPROP_MUL: dC = dA * dB; goto doubleresult; 
case JIM_EXPROP_DIV: if (dB == 0) { #ifdef INFINITY dC = dA < 0 ? -INFINITY : INFINITY; #else dC = (dA < 0 ? -1.0 : 1.0) * strtod("Inf", NULL); #endif } else { dC = dA / dB; } goto doubleresult; case JIM_EXPROP_LT: wC = dA < dB; goto intresult; case JIM_EXPROP_GT: wC = dA > dB; goto intresult; case JIM_EXPROP_LTE: wC = dA <= dB; goto intresult; case JIM_EXPROP_GTE: wC = dA >= dB; goto intresult; case JIM_EXPROP_NUMEQ: wC = dA == dB; goto intresult; case JIM_EXPROP_NUMNE: wC = dA != dB; goto intresult; } } else { int i = Jim_StringCompareObj(interp, A, B, 0); switch (e->opcode) { case JIM_EXPROP_LT: wC = i < 0; goto intresult; case JIM_EXPROP_GT: wC = i > 0; goto intresult; case JIM_EXPROP_LTE: wC = i <= 0; goto intresult; case JIM_EXPROP_GTE: wC = i >= 0; goto intresult; case JIM_EXPROP_NUMEQ: wC = i == 0; goto intresult; case JIM_EXPROP_NUMNE: wC = i != 0; goto intresult; } } rc = JIM_ERR; done: Jim_DecrRefCount(interp, A); Jim_DecrRefCount(interp, B); return rc; intresult: ExprPush(e, Jim_NewIntObj(interp, wC)); goto done; doubleresult: ExprPush(e, Jim_NewDoubleObj(interp, dC)); goto done; } static int JimSearchList(Jim_Interp *interp, Jim_Obj *listObjPtr, Jim_Obj *valObj) { int listlen; int i; |
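[Note, not part of the check-in] The rewritten JimExprOpBin() above tries wide-integer arithmetic when both operands parse as integers, falls back to doubles, and finally to string comparison; its integer division is floored rather than truncated toward zero. A small standalone sketch that mirrors the flooring rule visible in the hunk above (an illustration, not code taken from the check-in):

    #include <stdio.h>

    /* Floored division as done in JimExprOpBin: C's "/" truncates toward
     * zero, so the quotient is adjusted downward whenever the remainder
     * is negative after normalising the divisor's sign. */
    static long long floored_div(long long a, long long b)
    {
        long long q;
        if (b < 0) {        /* normalise the divisor sign, as the hunk does */
            b = -b;
            a = -a;
        }
        q = a / b;
        if (a % b < 0) {    /* negative remainder: round toward -infinity */
            q--;
        }
        return q;
    }

    int main(void)
    {
        printf("%lld\n", floored_div(-7, 2));   /* prints -4, not -3 */
        return 0;
    }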
︙ | ︙ | |||
12957 12958 12959 12960 12961 12962 12963 12964 12965 12966 12967 12968 12969 12970 12971 12972 12973 12974 12975 12976 12977 12978 12979 12980 12981 | return JIM_OK; } static int ExprBool(Jim_Interp *interp, Jim_Obj *obj) { long l; double d; if (Jim_GetLong(interp, obj, &l) == JIM_OK) { return l != 0; } if (Jim_GetDouble(interp, obj, &d) == JIM_OK) { return d != 0; } return -1; } static int JimExprOpAndLeft(Jim_Interp *interp, struct JimExprState *e) { Jim_Obj *skip = ExprPop(e); Jim_Obj *A = ExprPop(e); int rc = JIM_OK; switch (ExprBool(interp, A)) { case 0: | > > > > | | | | | | | 12958 12959 12960 12961 12962 12963 12964 12965 12966 12967 12968 12969 12970 12971 12972 12973 12974 12975 12976 12977 12978 12979 12980 12981 12982 12983 12984 12985 12986 12987 12988 12989 12990 12991 12992 12993 12994 12995 12996 12997 12998 12999 13000 13001 13002 13003 13004 13005 13006 13007 13008 13009 13010 13011 13012 13013 13014 13015 13016 13017 13018 13019 13020 13021 13022 13023 13024 13025 13026 13027 13028 13029 13030 13031 | return JIM_OK; } static int ExprBool(Jim_Interp *interp, Jim_Obj *obj) { long l; double d; int b; if (Jim_GetLong(interp, obj, &l) == JIM_OK) { return l != 0; } if (Jim_GetDouble(interp, obj, &d) == JIM_OK) { return d != 0; } if (Jim_GetBoolean(interp, obj, &b) == JIM_OK) { return b != 0; } return -1; } static int JimExprOpAndLeft(Jim_Interp *interp, struct JimExprState *e) { Jim_Obj *skip = ExprPop(e); Jim_Obj *A = ExprPop(e); int rc = JIM_OK; switch (ExprBool(interp, A)) { case 0: e->skip = JimWideValue(skip); ExprPush(e, Jim_NewIntObj(interp, 0)); break; case 1: break; case -1: rc = JIM_ERR; } Jim_DecrRefCount(interp, A); Jim_DecrRefCount(interp, skip); return rc; } static int JimExprOpOrLeft(Jim_Interp *interp, struct JimExprState *e) { Jim_Obj *skip = ExprPop(e); Jim_Obj *A = ExprPop(e); int rc = JIM_OK; switch (ExprBool(interp, A)) { case 0: break; case 1: e->skip = JimWideValue(skip); ExprPush(e, Jim_NewIntObj(interp, 1)); break; case -1: rc = JIM_ERR; break; } Jim_DecrRefCount(interp, A); Jim_DecrRefCount(interp, skip); return rc; |
︙ | ︙ | |||
13037 13038 13039 13040 13041 13042 13043 | break; case 1: ExprPush(e, Jim_NewIntObj(interp, 1)); break; case -1: | | | | | | | | | | | > | | | 13042 13043 13044 13045 13046 13047 13048 13049 13050 13051 13052 13053 13054 13055 13056 13057 13058 13059 13060 13061 13062 13063 13064 13065 13066 13067 13068 13069 13070 13071 13072 13073 13074 13075 13076 13077 13078 13079 13080 13081 13082 13083 13084 13085 13086 13087 13088 13089 13090 13091 13092 13093 13094 13095 13096 13097 13098 13099 13100 13101 13102 13103 13104 13105 13106 13107 13108 13109 13110 13111 13112 13113 13114 13115 13116 13117 13118 13119 13120 13121 13122 13123 13124 13125 13126 13127 13128 13129 13130 13131 13132 | break; case 1: ExprPush(e, Jim_NewIntObj(interp, 1)); break; case -1: rc = JIM_ERR; break; } Jim_DecrRefCount(interp, A); return rc; } static int JimExprOpTernaryLeft(Jim_Interp *interp, struct JimExprState *e) { Jim_Obj *skip = ExprPop(e); Jim_Obj *A = ExprPop(e); int rc = JIM_OK; ExprPush(e, A); switch (ExprBool(interp, A)) { case 0: e->skip = JimWideValue(skip); ExprPush(e, Jim_NewIntObj(interp, 0)); break; case 1: break; case -1: rc = JIM_ERR; break; } Jim_DecrRefCount(interp, A); Jim_DecrRefCount(interp, skip); return rc; } static int JimExprOpColonLeft(Jim_Interp *interp, struct JimExprState *e) { Jim_Obj *skip = ExprPop(e); Jim_Obj *B = ExprPop(e); Jim_Obj *A = ExprPop(e); if (ExprBool(interp, A)) { e->skip = JimWideValue(skip); ExprPush(e, B); } Jim_DecrRefCount(interp, skip); Jim_DecrRefCount(interp, A); Jim_DecrRefCount(interp, B); return JIM_OK; } static int JimExprOpNull(Jim_Interp *interp, struct JimExprState *e) { return JIM_OK; } enum { LAZY_NONE, LAZY_OP, LAZY_LEFT, LAZY_RIGHT, RIGHT_ASSOC, }; #define OPRINIT_ATTR(N, P, ARITY, F, ATTR) {N, F, P, ARITY, ATTR, sizeof(N) - 1} #define OPRINIT(N, P, ARITY, F) OPRINIT_ATTR(N, P, ARITY, F, LAZY_NONE) static const struct Jim_ExprOperator Jim_ExprOperators[] = { OPRINIT("*", 110, 2, JimExprOpBin), OPRINIT("/", 110, 2, JimExprOpBin), OPRINIT("%", 110, 2, JimExprOpIntBin), OPRINIT("-", 100, 2, JimExprOpBin), |
︙ | ︙ | |||
13140 13141 13142 13143 13144 13145 13146 | OPRINIT("==", 70, 2, JimExprOpBin), OPRINIT("!=", 70, 2, JimExprOpBin), OPRINIT("&", 50, 2, JimExprOpIntBin), OPRINIT("^", 49, 2, JimExprOpIntBin), OPRINIT("|", 48, 2, JimExprOpIntBin), | | | | | | | | | | | | | > | | 13146 13147 13148 13149 13150 13151 13152 13153 13154 13155 13156 13157 13158 13159 13160 13161 13162 13163 13164 13165 13166 13167 13168 13169 13170 13171 13172 13173 13174 13175 13176 13177 | OPRINIT("==", 70, 2, JimExprOpBin), OPRINIT("!=", 70, 2, JimExprOpBin), OPRINIT("&", 50, 2, JimExprOpIntBin), OPRINIT("^", 49, 2, JimExprOpIntBin), OPRINIT("|", 48, 2, JimExprOpIntBin), OPRINIT_ATTR("&&", 10, 2, NULL, LAZY_OP), OPRINIT_ATTR(NULL, 10, 2, JimExprOpAndLeft, LAZY_LEFT), OPRINIT_ATTR(NULL, 10, 2, JimExprOpAndOrRight, LAZY_RIGHT), OPRINIT_ATTR("||", 9, 2, NULL, LAZY_OP), OPRINIT_ATTR(NULL, 9, 2, JimExprOpOrLeft, LAZY_LEFT), OPRINIT_ATTR(NULL, 9, 2, JimExprOpAndOrRight, LAZY_RIGHT), OPRINIT_ATTR("?", 5, 2, JimExprOpNull, LAZY_OP), OPRINIT_ATTR(NULL, 5, 2, JimExprOpTernaryLeft, LAZY_LEFT), OPRINIT_ATTR(NULL, 5, 2, JimExprOpNull, LAZY_RIGHT), OPRINIT_ATTR(":", 5, 2, JimExprOpNull, LAZY_OP), OPRINIT_ATTR(NULL, 5, 2, JimExprOpColonLeft, LAZY_LEFT), OPRINIT_ATTR(NULL, 5, 2, JimExprOpNull, LAZY_RIGHT), OPRINIT_ATTR("**", 120, 2, JimExprOpBin, RIGHT_ASSOC), OPRINIT("eq", 60, 2, JimExprOpStrBin), OPRINIT("ne", 60, 2, JimExprOpStrBin), OPRINIT("in", 55, 2, JimExprOpStrBin), OPRINIT("ni", 55, 2, JimExprOpStrBin), |
︙ | ︙ | |||
13186 13187 13188 13189 13190 13191 13192 13193 13194 13195 13196 13197 13198 13199 13200 13201 13202 13203 13204 13205 13206 13207 13208 13209 13210 13211 13212 | #ifdef JIM_MATH_FUNCTIONS OPRINIT("sin", 200, 1, JimExprOpDoubleUnary), OPRINIT("cos", 200, 1, JimExprOpDoubleUnary), OPRINIT("tan", 200, 1, JimExprOpDoubleUnary), OPRINIT("asin", 200, 1, JimExprOpDoubleUnary), OPRINIT("acos", 200, 1, JimExprOpDoubleUnary), OPRINIT("atan", 200, 1, JimExprOpDoubleUnary), OPRINIT("sinh", 200, 1, JimExprOpDoubleUnary), OPRINIT("cosh", 200, 1, JimExprOpDoubleUnary), OPRINIT("tanh", 200, 1, JimExprOpDoubleUnary), OPRINIT("ceil", 200, 1, JimExprOpDoubleUnary), OPRINIT("floor", 200, 1, JimExprOpDoubleUnary), OPRINIT("exp", 200, 1, JimExprOpDoubleUnary), OPRINIT("log", 200, 1, JimExprOpDoubleUnary), OPRINIT("log10", 200, 1, JimExprOpDoubleUnary), OPRINIT("sqrt", 200, 1, JimExprOpDoubleUnary), OPRINIT("pow", 200, 2, JimExprOpBin), #endif }; #undef OPRINIT #undef OPRINIT_LAZY #define JIM_EXPR_OPERATORS_NUM \ (sizeof(Jim_ExprOperators)/sizeof(struct Jim_ExprOperator)) static int JimParseExpression(struct JimParserCtx *pc) { | > > > | | | 13193 13194 13195 13196 13197 13198 13199 13200 13201 13202 13203 13204 13205 13206 13207 13208 13209 13210 13211 13212 13213 13214 13215 13216 13217 13218 13219 13220 13221 13222 13223 13224 13225 13226 13227 13228 13229 13230 13231 13232 13233 13234 13235 13236 13237 13238 13239 | #ifdef JIM_MATH_FUNCTIONS OPRINIT("sin", 200, 1, JimExprOpDoubleUnary), OPRINIT("cos", 200, 1, JimExprOpDoubleUnary), OPRINIT("tan", 200, 1, JimExprOpDoubleUnary), OPRINIT("asin", 200, 1, JimExprOpDoubleUnary), OPRINIT("acos", 200, 1, JimExprOpDoubleUnary), OPRINIT("atan", 200, 1, JimExprOpDoubleUnary), OPRINIT("atan2", 200, 2, JimExprOpBin), OPRINIT("sinh", 200, 1, JimExprOpDoubleUnary), OPRINIT("cosh", 200, 1, JimExprOpDoubleUnary), OPRINIT("tanh", 200, 1, JimExprOpDoubleUnary), OPRINIT("ceil", 200, 1, JimExprOpDoubleUnary), OPRINIT("floor", 200, 1, JimExprOpDoubleUnary), OPRINIT("exp", 200, 1, JimExprOpDoubleUnary), OPRINIT("log", 200, 1, JimExprOpDoubleUnary), OPRINIT("log10", 200, 1, JimExprOpDoubleUnary), OPRINIT("sqrt", 200, 1, JimExprOpDoubleUnary), OPRINIT("pow", 200, 2, JimExprOpBin), OPRINIT("hypot", 200, 2, JimExprOpBin), OPRINIT("fmod", 200, 2, JimExprOpBin), #endif }; #undef OPRINIT #undef OPRINIT_LAZY #define JIM_EXPR_OPERATORS_NUM \ (sizeof(Jim_ExprOperators)/sizeof(struct Jim_ExprOperator)) static int JimParseExpression(struct JimParserCtx *pc) { while (isspace(UCHAR(*pc->p)) || (*(pc->p) == '\\' && *(pc->p + 1) == '\n')) { if (*pc->p == '\n') { pc->linenr++; } pc->p++; pc->len--; } pc->tline = pc->linenr; pc->tstart = pc->p; if (pc->len == 0) { pc->tend = pc->p; pc->tt = JIM_TT_EOL; pc->eof = 1; |
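[Note, not part of the check-in] The operator table above now also registers atan2, hypot and fmod as two-argument math functions, still guarded by JIM_MATH_FUNCTIONS. A hedged usage sketch, assuming a full jimtcl build with jim.h, Jim_CreateInterp() and Jim_RegisterCoreCommands() available and JIM_MATH_FUNCTIONS defined at compile time:

    #include <stdio.h>
    #include <jim.h>

    int main(void)
    {
        Jim_Interp *interp = Jim_CreateInterp();
        Jim_RegisterCoreCommands(interp);      /* makes [expr] available */

        /* hypot(3, 4) should evaluate to 5.0 */
        if (Jim_Eval(interp, "expr {hypot(3, 4)}") == JIM_OK) {
            printf("hypot(3, 4) = %s\n", Jim_String(Jim_GetResult(interp)));
        }
        Jim_FreeInterp(interp);
        return 0;
    }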
︙ | ︙ | |||
13245 13246 13247 13248 13249 13250 13251 | break; case '[': return JimParseCmd(pc); case '$': if (JimParseVar(pc) == JIM_ERR) return JimParseExprOperator(pc); else { | | | 13255 13256 13257 13258 13259 13260 13261 13262 13263 13264 13265 13266 13267 13268 13269 | break; case '[': return JimParseCmd(pc); case '$': if (JimParseVar(pc) == JIM_ERR) return JimParseExprOperator(pc); else { if (pc->tt == JIM_TT_EXPRSUGAR) { return JIM_ERR; } return JIM_OK; } break; case '0': |
︙ | ︙ | |||
13274 13275 13276 13277 13278 13279 13280 13281 13282 13283 13284 13285 13286 13287 13288 13289 13290 13291 13292 13293 | return JimParseBrace(pc); case 'N': case 'I': case 'n': case 'i': if (JimParseExprIrrational(pc) == JIM_ERR) return JimParseExprOperator(pc); break; default: return JimParseExprOperator(pc); break; } return JIM_OK; } static int JimParseExprNumber(struct JimParserCtx *pc) { char *end; | > > > > > > > > | | | | | 13284 13285 13286 13287 13288 13289 13290 13291 13292 13293 13294 13295 13296 13297 13298 13299 13300 13301 13302 13303 13304 13305 13306 13307 13308 13309 13310 13311 13312 13313 13314 13315 13316 13317 13318 13319 13320 13321 13322 13323 13324 13325 13326 13327 13328 13329 | return JimParseBrace(pc); case 'N': case 'I': case 'n': case 'i': if (JimParseExprIrrational(pc) == JIM_ERR) if (JimParseExprBoolean(pc) == JIM_ERR) return JimParseExprOperator(pc); break; case 't': case 'f': case 'o': case 'y': if (JimParseExprBoolean(pc) == JIM_ERR) return JimParseExprOperator(pc); break; default: return JimParseExprOperator(pc); break; } return JIM_OK; } static int JimParseExprNumber(struct JimParserCtx *pc) { char *end; pc->tt = JIM_TT_EXPR_INT; jim_strtoull(pc->p, (char **)&pc->p); if (strchr("eENnIi.", *pc->p) || pc->p == pc->tstart) { if (strtod(pc->tstart, &end)) { } if (end == pc->tstart) return JIM_ERR; if (end > pc->p) { pc->tt = JIM_TT_EXPR_DOUBLE; pc->p = end; } } pc->tend = pc->p - 1; pc->len -= (pc->p - pc->tstart); return JIM_OK; |
︙ | ︙ | |||
13325 13326 13327 13328 13329 13330 13331 13332 13333 13334 13335 13336 13337 | pc->tend = pc->p - 1; pc->tt = JIM_TT_EXPR_DOUBLE; return JIM_OK; } } return JIM_ERR; } static int JimParseExprOperator(struct JimParserCtx *pc) { int i; int bestIdx = -1, bestLen = 0; | > > > > > > > > > > > > > > > > > > > > > | | | 13343 13344 13345 13346 13347 13348 13349 13350 13351 13352 13353 13354 13355 13356 13357 13358 13359 13360 13361 13362 13363 13364 13365 13366 13367 13368 13369 13370 13371 13372 13373 13374 13375 13376 13377 13378 13379 13380 13381 13382 13383 13384 13385 13386 13387 13388 13389 13390 13391 13392 13393 13394 13395 13396 13397 13398 13399 13400 13401 13402 | pc->tend = pc->p - 1; pc->tt = JIM_TT_EXPR_DOUBLE; return JIM_OK; } } return JIM_ERR; } static int JimParseExprBoolean(struct JimParserCtx *pc) { const char *booleans[] = { "false", "no", "off", "true", "yes", "on", NULL }; const int lengths[] = { 5, 2, 3, 4, 3, 2, 0 }; int i; for (i = 0; booleans[i]; i++) { const char *boolean = booleans[i]; int length = lengths[i]; if (strncmp(boolean, pc->p, length) == 0) { pc->p += length; pc->len -= length; pc->tend = pc->p - 1; pc->tt = JIM_TT_EXPR_BOOLEAN; return JIM_OK; } } return JIM_ERR; } static int JimParseExprOperator(struct JimParserCtx *pc) { int i; int bestIdx = -1, bestLen = 0; for (i = 0; i < (signed)JIM_EXPR_OPERATORS_NUM; i++) { const char * const opname = Jim_ExprOperators[i].name; const int oplen = Jim_ExprOperators[i].namelen; if (opname == NULL || opname[0] != pc->p[0]) { continue; } if (oplen > bestLen && strncmp(opname, pc->p, oplen) == 0) { bestIdx = i + JIM_TT_EXPR_OP; bestLen = oplen; } } if (bestIdx == -1) { return JIM_ERR; } if (bestIdx >= JIM_EXPROP_FUNC_FIRST) { const char *p = pc->p + bestLen; int len = pc->len - bestLen; while (len && isspace(UCHAR(*p))) { len--; p++; |
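[Note, not part of the check-in] JimParseExprBoolean() above lets bare boolean words (false/no/off and true/yes/on) appear as operands inside expressions, where operators such as && coerce them through ExprBool() and the new Jim_GetBoolean(). A short sketch of the visible effect, under the same assumptions as the interpreter examples earlier in this diff (illustrative only; the exact result string is an expectation based on mainline jimtcl behaviour):

    #include <stdio.h>
    #include <jim.h>

    int main(void)
    {
        Jim_Interp *interp = Jim_CreateInterp();
        Jim_RegisterCoreCommands(interp);

        /* "yes" is parsed as a boolean literal; && coerces it via ExprBool(),
         * so the whole expression is expected to evaluate to 1. */
        if (Jim_Eval(interp, "expr {1 && yes}") == JIM_OK) {
            printf("1 && yes = %s\n", Jim_String(Jim_GetResult(interp)));
        }
        Jim_FreeInterp(interp);
        return 0;
    }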
︙ | ︙ | |||
13383 13384 13385 13386 13387 13388 13389 | return &Jim_ExprOperators[opcode - JIM_TT_EXPR_OP]; } const char *jim_tt_name(int type) { static const char * const tt_names[JIM_TT_EXPR_OP] = { "NIL", "STR", "ESC", "VAR", "ARY", "CMD", "SEP", "EOL", "EOF", "LIN", "WRD", "(((", ")))", ",,,", "INT", | | > > > > > > | 13422 13423 13424 13425 13426 13427 13428 13429 13430 13431 13432 13433 13434 13435 13436 13437 13438 13439 13440 13441 13442 13443 13444 | return &Jim_ExprOperators[opcode - JIM_TT_EXPR_OP]; } const char *jim_tt_name(int type) { static const char * const tt_names[JIM_TT_EXPR_OP] = { "NIL", "STR", "ESC", "VAR", "ARY", "CMD", "SEP", "EOL", "EOF", "LIN", "WRD", "(((", ")))", ",,,", "INT", "DBL", "BOO", "$()" }; if (type < JIM_TT_EXPR_OP) { return tt_names[type]; } else if (type == JIM_EXPROP_UNARYMINUS) { return "-VE"; } else if (type == JIM_EXPROP_UNARYPLUS) { return "+VE"; } else { const struct Jim_ExprOperator *op = JimExprOperatorInfoByOpcode(type); static char buf[20]; if (op->name) { return op->name; |
︙ | ︙ | |||
13414 13415 13416 13417 13418 13419 13420 | NULL, JIM_TYPE_REFERENCES, }; typedef struct ExprByteCode { | | | | | 13459 13460 13461 13462 13463 13464 13465 13466 13467 13468 13469 13470 13471 13472 13473 13474 13475 | NULL, JIM_TYPE_REFERENCES, }; typedef struct ExprByteCode { ScriptToken *token; int len; int inUse; } ExprByteCode; static void ExprFreeByteCode(Jim_Interp *interp, ExprByteCode * expr) { int i; for (i = 0; i < expr->len; i++) { |
︙ | ︙ | |||
13448 13449 13450 13451 13452 13453 13454 | } static void DupExprInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr) { JIM_NOTUSED(interp); JIM_NOTUSED(srcPtr); | | < | > > > > | | | > > > > > > > > > > > > > > > > > > > > > | | | | | | 13493 13494 13495 13496 13497 13498 13499 13500 13501 13502 13503 13504 13505 13506 13507 13508 13509 13510 13511 13512 13513 13514 13515 13516 13517 13518 13519 13520 13521 13522 13523 13524 13525 13526 13527 13528 13529 13530 13531 13532 13533 13534 13535 13536 13537 13538 13539 13540 13541 13542 13543 13544 13545 13546 13547 13548 13549 13550 13551 13552 13553 13554 13555 13556 13557 13558 13559 13560 13561 13562 13563 13564 13565 13566 13567 13568 13569 13570 13571 13572 13573 13574 13575 13576 13577 13578 13579 13580 13581 13582 13583 13584 13585 13586 13587 13588 13589 13590 13591 13592 13593 13594 13595 13596 13597 13598 13599 13600 13601 13602 13603 13604 13605 13606 | } static void DupExprInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr) { JIM_NOTUSED(interp); JIM_NOTUSED(srcPtr); dupPtr->typePtr = NULL; } static int ExprCheckCorrectness(Jim_Interp *interp, Jim_Obj *exprObjPtr, ExprByteCode * expr) { int i; int stacklen = 0; int ternary = 0; int lasttt = JIM_TT_NONE; const char *errmsg; for (i = 0; i < expr->len; i++) { ScriptToken *t = &expr->token[i]; const struct Jim_ExprOperator *op = JimExprOperatorInfoByOpcode(t->type); lasttt = t->type; stacklen -= op->arity; if (stacklen < 0) { break; } if (t->type == JIM_EXPROP_TERNARY || t->type == JIM_EXPROP_TERNARY_LEFT) { ternary++; } else if (t->type == JIM_EXPROP_COLON || t->type == JIM_EXPROP_COLON_LEFT) { ternary--; } stacklen++; } if (stacklen == 1 && ternary == 0) { return JIM_OK; } if (stacklen <= 0) { if (lasttt >= JIM_EXPROP_FUNC_FIRST) { errmsg = "too few arguments for math function"; Jim_SetResultString(interp, "too few arguments for math function", -1); } else { errmsg = "premature end of expression"; } } else if (stacklen > 1) { if (lasttt >= JIM_EXPROP_FUNC_FIRST) { errmsg = "too many arguments for math function"; } else { errmsg = "extra tokens at end of expression"; } } else { errmsg = "invalid ternary expression"; } Jim_SetResultFormatted(interp, "syntax error in expression \"%#s\": %s", exprObjPtr, errmsg); return JIM_ERR; } static int ExprAddLazyOperator(Jim_Interp *interp, ExprByteCode * expr, ParseToken *t) { int i; int leftindex, arity, offset; leftindex = expr->len - 1; arity = 1; while (arity) { ScriptToken *tt = &expr->token[leftindex]; if (tt->type >= JIM_TT_EXPR_OP) { arity += JimExprOperatorInfoByOpcode(tt->type)->arity; } arity--; if (--leftindex < 0) { return JIM_ERR; } } leftindex++; memmove(&expr->token[leftindex + 2], &expr->token[leftindex], sizeof(*expr->token) * (expr->len - leftindex)); expr->len += 2; offset = (expr->len - leftindex) - 1; expr->token[leftindex + 1].type = t->type + 1; expr->token[leftindex + 1].objPtr = interp->emptyObj; expr->token[leftindex].type = JIM_TT_EXPR_INT; expr->token[leftindex].objPtr = Jim_NewIntObj(interp, offset); expr->token[expr->len].objPtr = interp->emptyObj; expr->token[expr->len].type = t->type + 2; expr->len++; for (i = leftindex - 1; i > 0; i--) { const struct Jim_ExprOperator *op = JimExprOperatorInfoByOpcode(expr->token[i].type); if (op->lazy == LAZY_LEFT) { if (JimWideValue(expr->token[i - 1].objPtr) + i - 1 >= leftindex) { JimWideValue(expr->token[i - 1].objPtr) += 2; } } |
︙ | ︙ | |||
13573 13574 13575 13576 13577 13578 13579 | } else if (expr->token[right_index].type == JIM_EXPROP_COLON_LEFT && ternary_count == 1) { return right_index; } right_index--; } | | | 13642 13643 13644 13645 13646 13647 13648 13649 13650 13651 13652 13653 13654 13655 13656 | } else if (expr->token[right_index].type == JIM_EXPROP_COLON_LEFT && ternary_count == 1) { return right_index; } right_index--; } return -1; } static int ExprTernaryGetMoveIndices(ExprByteCode *expr, int right_index, int *prev_right_index, int *prev_left_index) { int i = right_index - 1; int ternary_count = 1; |
︙ | ︙ | |||
13615 13616 13617 13618 13619 13620 13621 | int j; ScriptToken tmp; if (expr->token[i].type != JIM_EXPROP_COLON_RIGHT) { continue; } | | | | | | | < < < < < < < < < < < | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | | | | | | | | | | | | | | | | | > | > > | | | | | | | | | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | 13684 13685 13686 13687 13688 13689 13690 13691 13692 13693 13694 13695 13696 13697 13698 13699 13700 13701 13702 13703 13704 13705 13706 13707 13708 13709 13710 13711 13712 13713 13714 13715 13716 13717 13718 13719 13720 13721 13722 13723 13724 13725 13726 13727 13728 13729 13730 13731 13732 13733 13734 13735 13736 13737 13738 13739 13740 13741 13742 13743 13744 13745 13746 13747 13748 13749 13750 13751 13752 13753 13754 13755 13756 13757 13758 13759 13760 13761 13762 13763 13764 13765 13766 13767 13768 13769 13770 13771 13772 13773 13774 13775 13776 13777 13778 13779 13780 13781 13782 13783 13784 13785 13786 13787 13788 13789 13790 13791 13792 13793 13794 13795 13796 13797 13798 13799 13800 13801 13802 13803 13804 13805 13806 13807 13808 13809 13810 13811 13812 13813 13814 13815 13816 13817 13818 13819 13820 13821 13822 13823 13824 13825 13826 13827 13828 13829 13830 13831 13832 13833 13834 13835 13836 13837 13838 13839 13840 13841 13842 13843 13844 13845 13846 13847 13848 13849 13850 13851 13852 13853 13854 13855 13856 13857 13858 13859 13860 13861 13862 13863 13864 13865 13866 13867 13868 13869 13870 13871 13872 13873 13874 13875 13876 13877 13878 13879 13880 13881 13882 13883 13884 13885 13886 13887 13888 13889 | int j; ScriptToken tmp; if (expr->token[i].type != JIM_EXPROP_COLON_RIGHT) { continue; } if (ExprTernaryGetMoveIndices(expr, i, &prev_right_index, &prev_left_index) == 0) { continue; } tmp = expr->token[prev_right_index]; for (j = prev_right_index; j < i; j++) { expr->token[j] = expr->token[j + 1]; } expr->token[i] = tmp; JimWideValue(expr->token[prev_left_index-1].objPtr) += (i - prev_right_index); i++; } } static ExprByteCode *ExprCreateByteCode(Jim_Interp *interp, const ParseTokenList *tokenlist, Jim_Obj *exprObjPtr, Jim_Obj *fileNameObj) { Jim_Stack stack; ExprByteCode *expr; int ok = 1; int i; int prevtt = JIM_TT_NONE; int have_ternary = 0; int count = tokenlist->count - 1; expr = Jim_Alloc(sizeof(*expr)); expr->inUse = 1; expr->len = 0; Jim_InitStack(&stack); for (i = 0; i < tokenlist->count; i++) { ParseToken *t = &tokenlist->list[i]; const struct Jim_ExprOperator *op = JimExprOperatorInfoByOpcode(t->type); if (op->lazy == LAZY_OP) { count += 2; if (t->type == JIM_EXPROP_TERNARY) { have_ternary = 1; } } } expr->token = Jim_Alloc(sizeof(ScriptToken) * count); for (i = 0; i < tokenlist->count && ok; i++) { ParseToken *t = &tokenlist->list[i]; struct ScriptToken *token = &expr->token[expr->len]; if (t->type == JIM_TT_EOL) { break; } if (TOKEN_IS_EXPR_OP(t->type)) { const struct Jim_ExprOperator *op; ParseToken *tt; if (prevtt == JIM_TT_NONE || prevtt == JIM_TT_SUBEXPR_START || prevtt == JIM_TT_SUBEXPR_COMMA || prevtt >= JIM_TT_EXPR_OP) { if (t->type == JIM_EXPROP_SUB) { t->type = JIM_EXPROP_UNARYMINUS; } else if (t->type == JIM_EXPROP_ADD) { t->type = JIM_EXPROP_UNARYPLUS; } } op = JimExprOperatorInfoByOpcode(t->type); while ((tt = Jim_StackPeek(&stack)) != NULL) { const struct Jim_ExprOperator *tt_op = JimExprOperatorInfoByOpcode(tt->type); if (op->arity != 1 && tt_op->precedence >= 
op->precedence) { if (tt_op->precedence == op->precedence && tt_op->lazy == RIGHT_ASSOC) { break; } if (ExprAddOperator(interp, expr, tt) != JIM_OK) { ok = 0; goto err; } Jim_StackPop(&stack); } else { break; } } Jim_StackPush(&stack, t); } else if (t->type == JIM_TT_SUBEXPR_START) { Jim_StackPush(&stack, t); } else if (t->type == JIM_TT_SUBEXPR_END || t->type == JIM_TT_SUBEXPR_COMMA) { ok = 0; while (Jim_StackLen(&stack)) { ParseToken *tt = Jim_StackPop(&stack); if (tt->type == JIM_TT_SUBEXPR_START || tt->type == JIM_TT_SUBEXPR_COMMA) { if (t->type == JIM_TT_SUBEXPR_COMMA) { Jim_StackPush(&stack, tt); } ok = 1; break; } if (ExprAddOperator(interp, expr, tt) != JIM_OK) { goto err; } } if (!ok) { Jim_SetResultFormatted(interp, "Unexpected close parenthesis in expression: \"%#s\"", exprObjPtr); goto err; } } else { Jim_Obj *objPtr = NULL; token->type = t->type; if (!TOKEN_IS_EXPR_START(prevtt) && !TOKEN_IS_EXPR_OP(prevtt)) { Jim_SetResultFormatted(interp, "missing operator in expression: \"%#s\"", exprObjPtr); ok = 0; goto err; } if (t->type == JIM_TT_EXPR_INT || t->type == JIM_TT_EXPR_DOUBLE) { char *endptr; if (t->type == JIM_TT_EXPR_INT) { objPtr = Jim_NewIntObj(interp, jim_strtoull(t->token, &endptr)); } else { objPtr = Jim_NewDoubleObj(interp, strtod(t->token, &endptr)); } if (endptr != t->token + t->len) { Jim_FreeNewObj(interp, objPtr); objPtr = NULL; } } if (objPtr) { token->objPtr = objPtr; } else { token->objPtr = Jim_NewStringObj(interp, t->token, t->len); if (t->type == JIM_TT_CMD) { JimSetSourceInfo(interp, token->objPtr, fileNameObj, t->line); } } expr->len++; } prevtt = t->type; } while (Jim_StackLen(&stack)) { ParseToken *tt = Jim_StackPop(&stack); if (tt->type == JIM_TT_SUBEXPR_START) { ok = 0; Jim_SetResultString(interp, "Missing close parenthesis", -1); goto err; } if (ExprAddOperator(interp, expr, tt) != JIM_OK) { ok = 0; goto err; } } if (have_ternary) { ExprTernaryReorderExpression(interp, expr); } err: Jim_FreeStack(&stack); for (i = 0; i < expr->len; i++) { Jim_IncrRefCount(expr->token[i].objPtr); } if (!ok) { |
︙ | ︙ | |||
13833 13834 13835 13836 13837 13838 13839 | struct JimParserCtx parser; struct ExprByteCode *expr; ParseTokenList tokenlist; int line; Jim_Obj *fileNameObj; int rc = JIM_ERR; | | | < | 13902 13903 13904 13905 13906 13907 13908 13909 13910 13911 13912 13913 13914 13915 13916 13917 13918 13919 13920 13921 13922 13923 13924 13925 13926 13927 13928 13929 13930 13931 13932 13933 13934 13935 | struct JimParserCtx parser; struct ExprByteCode *expr; ParseTokenList tokenlist; int line; Jim_Obj *fileNameObj; int rc = JIM_ERR; if (objPtr->typePtr == &sourceObjType) { fileNameObj = objPtr->internalRep.sourceValue.fileNameObj; line = objPtr->internalRep.sourceValue.lineNumber; } else { fileNameObj = interp->emptyObj; line = 1; } Jim_IncrRefCount(fileNameObj); exprText = Jim_GetString(objPtr, &exprTextLen); ScriptTokenListInit(&tokenlist); JimParserInit(&parser, exprText, exprTextLen, line); while (!parser.eof) { if (JimParseExpression(&parser) != JIM_OK) { ScriptTokenListFree(&tokenlist); Jim_SetResultFormatted(interp, "syntax error in expression: \"%#s\"", objPtr); expr = NULL; goto err; } ScriptAddToken(&tokenlist, parser.tstart, parser.tend - parser.tstart + 1, parser.tt, parser.tline); |
︙ | ︙ | |||
13880 13881 13882 13883 13884 13885 13886 | if (JimParseCheckMissing(interp, parser.missing.ch) == JIM_ERR) { ScriptTokenListFree(&tokenlist); Jim_DecrRefCount(interp, fileNameObj); return JIM_ERR; } | | | | | | > > | | | 13948 13949 13950 13951 13952 13953 13954 13955 13956 13957 13958 13959 13960 13961 13962 13963 13964 13965 13966 13967 13968 13969 13970 13971 13972 13973 13974 13975 13976 13977 13978 13979 13980 13981 13982 13983 13984 13985 13986 13987 13988 13989 13990 13991 13992 13993 13994 13995 13996 | if (JimParseCheckMissing(interp, parser.missing.ch) == JIM_ERR) { ScriptTokenListFree(&tokenlist); Jim_DecrRefCount(interp, fileNameObj); return JIM_ERR; } expr = ExprCreateByteCode(interp, &tokenlist, objPtr, fileNameObj); ScriptTokenListFree(&tokenlist); if (!expr) { goto err; } #ifdef DEBUG_SHOW_EXPR { int i; printf("==== Expr ====\n"); for (i = 0; i < expr->len; i++) { ScriptToken *t = &expr->token[i]; printf("[%2d] %s '%s'\n", i, jim_tt_name(t->type), Jim_String(t->objPtr)); } } #endif if (ExprCheckCorrectness(interp, objPtr, expr) != JIM_OK) { ExprFreeByteCode(interp, expr); expr = NULL; goto err; } rc = JIM_OK; err: Jim_DecrRefCount(interp, fileNameObj); Jim_FreeIntRep(interp, objPtr); Jim_SetIntRepPtr(objPtr, expr); objPtr->typePtr = &exprObjType; return rc; } |
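The hunk above finishes SetExprFromAny, which compiles an expression string into cached ExprByteCode on the object. As a rough, hedged sketch (not part of the diff), this is how the cached bytecode is consumed from C through Jim_EvalExpression, the evaluator that follows below; the literal expression is arbitrary and error handling is abbreviated:

    Jim_Obj *exprObj = Jim_NewStringObj(interp, "3 * (2 + 4)", -1);
    Jim_Obj *resultObj;
    Jim_IncrRefCount(exprObj);
    /* First use compiles the string into ExprByteCode and caches it as the
       object's internal representation; later evaluations reuse it. */
    if (Jim_EvalExpression(interp, exprObj, &resultObj) == JIM_OK) {
        jim_wide value;
        Jim_GetWide(interp, resultObj, &value);   /* value == 18 */
        Jim_DecrRefCount(interp, resultObj);      /* result carries a reference the caller releases */
    }
    Jim_DecrRefCount(interp, exprObj);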
︙ | ︙ | |||
13956 13957 13958 13959 13960 13961 13962 | Jim_Obj *staticStack[JIM_EE_STATICSTACK_LEN]; int i; int retcode = JIM_OK; struct JimExprState e; expr = JimGetExpression(interp, exprObjPtr); if (!expr) { | | | 14026 14027 14028 14029 14030 14031 14032 14033 14034 14035 14036 14037 14038 14039 14040 | Jim_Obj *staticStack[JIM_EE_STATICSTACK_LEN]; int i; int retcode = JIM_OK; struct JimExprState e; expr = JimGetExpression(interp, exprObjPtr); if (!expr) { return JIM_ERR; } #ifdef JIM_OPTIMIZATION { Jim_Obj *objPtr; |
︙ | ︙ | |||
14029 14030 14031 14032 14033 14034 14035 | } } noopt: #endif expr->inUse++; | | | > | 14099 14100 14101 14102 14103 14104 14105 14106 14107 14108 14109 14110 14111 14112 14113 14114 14115 14116 14117 14118 14119 14120 14121 14122 14123 14124 14125 14126 14127 14128 14129 | } } noopt: #endif expr->inUse++; if (expr->len > JIM_EE_STATICSTACK_LEN) e.stack = Jim_Alloc(sizeof(Jim_Obj *) * expr->len); else e.stack = staticStack; e.stacklen = 0; for (i = 0; i < expr->len && retcode == JIM_OK; i++) { Jim_Obj *objPtr; switch (expr->token[i].type) { case JIM_TT_EXPR_INT: case JIM_TT_EXPR_DOUBLE: case JIM_TT_EXPR_BOOLEAN: case JIM_TT_STR: ExprPush(&e, expr->token[i].objPtr); break; case JIM_TT_VAR: objPtr = Jim_GetVariable(interp, expr->token[i].objPtr, JIM_ERRMSG); if (objPtr) { |
︙ | ︙ | |||
14084 14085 14086 14087 14088 14089 14090 | retcode = Jim_EvalObj(interp, expr->token[i].objPtr); if (retcode == JIM_OK) { ExprPush(&e, Jim_GetResult(interp)); } break; default:{ | | | | 14155 14156 14157 14158 14159 14160 14161 14162 14163 14164 14165 14166 14167 14168 14169 14170 14171 14172 14173 14174 | retcode = Jim_EvalObj(interp, expr->token[i].objPtr); if (retcode == JIM_OK) { ExprPush(&e, Jim_GetResult(interp)); } break; default:{ e.skip = 0; e.opcode = expr->token[i].type; retcode = JimExprOperatorInfoByOpcode(e.opcode)->funcop(interp, &e); i += e.skip; continue; } } } expr->inUse--; |
︙ | ︙ | |||
14117 14118 14119 14120 14121 14122 14123 14124 14125 14126 14127 14128 14129 14130 14131 | } int Jim_GetBoolFromExpr(Jim_Interp *interp, Jim_Obj *exprObjPtr, int *boolPtr) { int retcode; jim_wide wideValue; double doubleValue; Jim_Obj *exprResultPtr; retcode = Jim_EvalExpression(interp, exprObjPtr, &exprResultPtr); if (retcode != JIM_OK) return retcode; if (JimGetWideNoErr(interp, exprResultPtr, &wideValue) != JIM_OK) { if (Jim_GetDouble(interp, exprResultPtr, &doubleValue) != JIM_OK) { | > > | | > > > > > | | | | | | | | | | | | | | | 14188 14189 14190 14191 14192 14193 14194 14195 14196 14197 14198 14199 14200 14201 14202 14203 14204 14205 14206 14207 14208 14209 14210 14211 14212 14213 14214 14215 14216 14217 14218 14219 14220 14221 14222 14223 14224 14225 14226 14227 14228 14229 14230 14231 14232 14233 14234 14235 14236 14237 14238 14239 14240 14241 14242 14243 14244 14245 14246 14247 14248 14249 14250 14251 14252 14253 14254 14255 | } int Jim_GetBoolFromExpr(Jim_Interp *interp, Jim_Obj *exprObjPtr, int *boolPtr) { int retcode; jim_wide wideValue; double doubleValue; int booleanValue; Jim_Obj *exprResultPtr; retcode = Jim_EvalExpression(interp, exprObjPtr, &exprResultPtr); if (retcode != JIM_OK) return retcode; if (JimGetWideNoErr(interp, exprResultPtr, &wideValue) != JIM_OK) { if (Jim_GetDouble(interp, exprResultPtr, &doubleValue) != JIM_OK) { if (Jim_GetBoolean(interp, exprResultPtr, &booleanValue) != JIM_OK) { Jim_DecrRefCount(interp, exprResultPtr); return JIM_ERR; } else { Jim_DecrRefCount(interp, exprResultPtr); *boolPtr = booleanValue; return JIM_OK; } } else { Jim_DecrRefCount(interp, exprResultPtr); *boolPtr = doubleValue != 0; return JIM_OK; } } *boolPtr = wideValue != 0; Jim_DecrRefCount(interp, exprResultPtr); return JIM_OK; } typedef struct ScanFmtPartDescr { char *arg; char *prefix; size_t width; int pos; char type; char modifier; } ScanFmtPartDescr; typedef struct ScanFmtStringObj { jim_wide size; char *stringRep; size_t count; size_t convCount; size_t maxPos; const char *error; char *scratch; ScanFmtPartDescr descr[1]; } ScanFmtStringObj; static void FreeScanFmtInternalRep(Jim_Interp *interp, Jim_Obj *objPtr); static void DupScanFmtInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr); static void UpdateStringOfScanFmt(Jim_Obj *objPtr); |
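The change above extends Jim_GetBoolFromExpr so that an expression result which is neither an integer nor a double can still be accepted through Jim_GetBoolean. A minimal sketch of a native command built on it follows; the command itself ([unless]) is hypothetical, only the call pattern mirrors the code above:

    /* Hypothetical [unless cond body]: run body when cond is false. */
    static int UnlessCmd(Jim_Interp *interp, int argc, Jim_Obj *const *argv)
    {
        int truth;
        if (argc != 3) {
            Jim_WrongNumArgs(interp, 1, argv, "condition body");
            return JIM_ERR;
        }
        /* Accepts numeric conditions and, with the change above, any value
           that Jim_GetBoolean understands. */
        if (Jim_GetBoolFromExpr(interp, argv[1], &truth) != JIM_OK) {
            return JIM_ERR;
        }
        return truth ? JIM_OK : Jim_EvalObj(interp, argv[2]);
    }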
︙ | ︙ | |||
14214 14215 14216 14217 14218 14219 14220 | int maxCount, i, approxSize, lastPos = -1; const char *fmt = objPtr->bytes; int maxFmtLen = objPtr->length; const char *fmtEnd = fmt + maxFmtLen; int curr; Jim_FreeIntRep(interp, objPtr); | | | | | | | | | | | | | | | | | | | | | | | | | | | | 14292 14293 14294 14295 14296 14297 14298 14299 14300 14301 14302 14303 14304 14305 14306 14307 14308 14309 14310 14311 14312 14313 14314 14315 14316 14317 14318 14319 14320 14321 14322 14323 14324 14325 14326 14327 14328 14329 14330 14331 14332 14333 14334 14335 14336 14337 14338 14339 14340 14341 14342 14343 14344 14345 14346 14347 14348 14349 14350 14351 14352 14353 14354 14355 14356 14357 14358 14359 14360 14361 14362 14363 14364 14365 14366 14367 14368 14369 14370 14371 14372 14373 14374 14375 14376 14377 14378 14379 14380 14381 14382 14383 14384 14385 14386 14387 14388 14389 14390 14391 14392 14393 14394 14395 14396 14397 14398 14399 14400 14401 14402 14403 14404 14405 14406 14407 14408 14409 14410 14411 14412 14413 14414 14415 14416 14417 14418 14419 14420 14421 14422 14423 14424 14425 14426 14427 14428 14429 14430 14431 14432 14433 14434 14435 14436 | int maxCount, i, approxSize, lastPos = -1; const char *fmt = objPtr->bytes; int maxFmtLen = objPtr->length; const char *fmtEnd = fmt + maxFmtLen; int curr; Jim_FreeIntRep(interp, objPtr); for (i = 0, maxCount = 0; i < maxFmtLen; ++i) if (fmt[i] == '%') ++maxCount; approxSize = sizeof(ScanFmtStringObj) +(maxCount + 1) * sizeof(ScanFmtPartDescr) +maxFmtLen * sizeof(char) + 3 + 1 + maxFmtLen * sizeof(char) + 1 + maxFmtLen * sizeof(char) +(maxCount + 1) * sizeof(char) +1; fmtObj = (ScanFmtStringObj *) Jim_Alloc(approxSize); memset(fmtObj, 0, approxSize); fmtObj->size = approxSize; fmtObj->maxPos = 0; fmtObj->scratch = (char *)&fmtObj->descr[maxCount + 1]; fmtObj->stringRep = fmtObj->scratch + maxFmtLen + 3 + 1; memcpy(fmtObj->stringRep, fmt, maxFmtLen); buffer = fmtObj->stringRep + maxFmtLen + 1; objPtr->internalRep.ptr = fmtObj; objPtr->typePtr = &scanFmtStringObjType; for (i = 0, curr = 0; fmt < fmtEnd; ++fmt) { int width = 0, skip; ScanFmtPartDescr *descr = &fmtObj->descr[curr]; fmtObj->count++; descr->width = 0; if (*fmt != '%' || fmt[1] == '%') { descr->type = 0; descr->prefix = &buffer[i]; for (; fmt < fmtEnd; ++fmt) { if (*fmt == '%') { if (fmt[1] != '%') break; ++fmt; } buffer[i++] = *fmt; } buffer[i++] = 0; } ++fmt; if (fmt >= fmtEnd) goto done; descr->pos = 0; if (*fmt == '*') { descr->pos = -1; ++fmt; } else fmtObj->convCount++; if (sscanf(fmt, "%d%n", &width, &skip) == 1) { fmt += skip; if (descr->pos != -1 && *fmt == '$') { int prev; ++fmt; descr->pos = width; width = 0; if ((lastPos == 0 && descr->pos > 0) || (lastPos > 0 && descr->pos == 0)) { fmtObj->error = "cannot mix \"%\" and \"%n$\" conversion specifiers"; return JIM_ERR; } for (prev = 0; prev < curr; ++prev) { if (fmtObj->descr[prev].pos == -1) continue; if (fmtObj->descr[prev].pos == descr->pos) { fmtObj->error = "variable is assigned by multiple \"%n$\" conversion specifiers"; return JIM_ERR; } } if (sscanf(fmt, "%d%n", &width, &skip) == 1) { descr->width = width; fmt += skip; } if (descr->pos > 0 && (size_t) descr->pos > fmtObj->maxPos) fmtObj->maxPos = descr->pos; } else { descr->width = width; } } if (lastPos == -1) lastPos = descr->pos; if (*fmt == '[') { int swapped = 1, beg = i, end, j; descr->type = '['; descr->arg = &buffer[i]; ++fmt; if (*fmt == '^') buffer[i++] = *fmt++; if (*fmt == ']') buffer[i++] = *fmt++; while (*fmt && *fmt != ']') buffer[i++] = *fmt++; if (*fmt != 
']') { fmtObj->error = "unmatched [ in format string"; return JIM_ERR; } end = i; buffer[i++] = 0; while (swapped) { swapped = 0; for (j = beg + 1; j < end - 1; ++j) { if (buffer[j] == '-' && buffer[j - 1] > buffer[j + 1]) { char tmp = buffer[j - 1]; buffer[j - 1] = buffer[j + 1]; buffer[j + 1] = tmp; swapped = 1; } } } } else { if (strchr("hlL", *fmt) != 0) descr->modifier = tolower((int)*fmt++); descr->type = *fmt; if (strchr("efgcsndoxui", *fmt) == 0) { fmtObj->error = "bad scan conversion character"; return JIM_ERR; |
︙ | ︙ | |||
14387 14388 14389 14390 14391 14392 14393 | char *p = buffer; while (*str) { int c; int n; if (!sdescr && isspace(UCHAR(*str))) | | | 14465 14466 14467 14468 14469 14470 14471 14472 14473 14474 14475 14476 14477 14478 14479 | char *p = buffer; while (*str) { int c; int n; if (!sdescr && isspace(UCHAR(*str))) break; n = utf8_tounicode(str, &c); if (sdescr && !JimCharsetMatch(sdescr, c, JIM_CHARSET_SCAN)) break; while (n--) *p++ = *str++; } |
︙ | ︙ | |||
14410 14411 14412 14413 14414 14415 14416 | const char *tok; const ScanFmtPartDescr *descr = &fmtObj->descr[idx]; size_t scanned = 0; size_t anchor = pos; int i; Jim_Obj *tmpObj = NULL; | | | | | | | | | | | | | | | | | | | | | 14488 14489 14490 14491 14492 14493 14494 14495 14496 14497 14498 14499 14500 14501 14502 14503 14504 14505 14506 14507 14508 14509 14510 14511 14512 14513 14514 14515 14516 14517 14518 14519 14520 14521 14522 14523 14524 14525 14526 14527 14528 14529 14530 14531 14532 14533 14534 14535 14536 14537 14538 14539 14540 14541 14542 14543 14544 14545 14546 14547 14548 14549 14550 14551 14552 14553 14554 14555 14556 14557 14558 14559 14560 14561 14562 14563 14564 14565 14566 14567 14568 14569 14570 14571 14572 14573 14574 14575 14576 14577 14578 14579 14580 14581 14582 14583 14584 14585 14586 14587 14588 14589 14590 14591 14592 14593 14594 14595 14596 14597 14598 14599 14600 14601 14602 14603 | const char *tok; const ScanFmtPartDescr *descr = &fmtObj->descr[idx]; size_t scanned = 0; size_t anchor = pos; int i; Jim_Obj *tmpObj = NULL; *valObjPtr = 0; if (descr->prefix) { for (i = 0; pos < strLen && descr->prefix[i]; ++i) { if (isspace(UCHAR(descr->prefix[i]))) while (pos < strLen && isspace(UCHAR(str[pos]))) ++pos; else if (descr->prefix[i] != str[pos]) break; else ++pos; } if (pos >= strLen) { return -1; } else if (descr->prefix[i] != 0) return 0; } if (descr->type != 'c' && descr->type != '[' && descr->type != 'n') while (isspace(UCHAR(str[pos]))) ++pos; scanned = pos - anchor; if (descr->type == 'n') { *valObjPtr = Jim_NewIntObj(interp, anchor + scanned); } else if (pos >= strLen) { return -1; } else if (descr->type == 'c') { int c; scanned += utf8_tounicode(&str[pos], &c); *valObjPtr = Jim_NewIntObj(interp, c); return scanned; } else { if (descr->width > 0) { size_t sLen = utf8_strlen(&str[pos], strLen - pos); size_t tLen = descr->width > sLen ? sLen : descr->width; tmpObj = Jim_NewStringObjUtf8(interp, str + pos, tLen); tok = tmpObj->bytes; } else { tok = &str[pos]; } switch (descr->type) { case 'd': case 'o': case 'x': case 'u': case 'i':{ char *endp; jim_wide w; int base = descr->type == 'o' ? 8 : descr->type == 'x' ? 16 : descr->type == 'i' ? 0 : 10; if (base == 0) { w = jim_strtoull(tok, &endp); } else { w = strtoull(tok, &endp, base); } if (endp != tok) { *valObjPtr = Jim_NewIntObj(interp, w); scanned += endp - tok; } else { scanned = *tok ? 0 : -1; } break; } case 's': case '[':{ *valObjPtr = JimScanAString(interp, descr->arg, tok); scanned += Jim_Length(*valObjPtr); break; } case 'e': case 'f': case 'g':{ char *endp; double value = strtod(tok, &endp); if (endp != tok) { *valObjPtr = Jim_NewDoubleObj(interp, value); scanned += endp - tok; } else { scanned = *tok ? 0 : -1; } break; } |
︙ | ︙ | |||
14540 14541 14542 14543 14544 14545 14546 | int strLen = Jim_Utf8Length(interp, strObjPtr); Jim_Obj *resultList = 0; Jim_Obj **resultVec = 0; int resultc; Jim_Obj *emptyStr = 0; ScanFmtStringObj *fmtObj; | | | | | | | | | | | | | | | | 14618 14619 14620 14621 14622 14623 14624 14625 14626 14627 14628 14629 14630 14631 14632 14633 14634 14635 14636 14637 14638 14639 14640 14641 14642 14643 14644 14645 14646 14647 14648 14649 14650 14651 14652 14653 14654 14655 14656 14657 14658 14659 14660 14661 14662 14663 14664 14665 14666 14667 14668 14669 14670 14671 14672 14673 14674 14675 14676 14677 14678 14679 14680 14681 14682 14683 14684 14685 14686 | int strLen = Jim_Utf8Length(interp, strObjPtr); Jim_Obj *resultList = 0; Jim_Obj **resultVec = 0; int resultc; Jim_Obj *emptyStr = 0; ScanFmtStringObj *fmtObj; JimPanic((fmtObjPtr->typePtr != &scanFmtStringObjType, "Jim_ScanString() for non-scan format")); fmtObj = (ScanFmtStringObj *) fmtObjPtr->internalRep.ptr; if (fmtObj->error != 0) { if (flags & JIM_ERRMSG) Jim_SetResultString(interp, fmtObj->error, -1); return 0; } emptyStr = Jim_NewEmptyStringObj(interp); Jim_IncrRefCount(emptyStr); resultList = Jim_NewListObj(interp, NULL, 0); if (fmtObj->maxPos > 0) { for (i = 0; i < fmtObj->maxPos; ++i) Jim_ListAppendElement(interp, resultList, emptyStr); JimListGetElements(interp, resultList, &resultc, &resultVec); } for (i = 0, pos = 0; i < fmtObj->count; ++i) { ScanFmtPartDescr *descr = &(fmtObj->descr[i]); Jim_Obj *value = 0; if (descr->type == 0) continue; if (scanned > 0) scanned = ScanOneEntry(interp, str, pos, strLen, fmtObj, i, &value); if (scanned == -1 && i == 0) goto eof; pos += scanned; if (value == 0) value = Jim_NewEmptyStringObj(interp); if (descr->pos == -1) { Jim_FreeNewObj(interp, value); } else if (descr->pos == 0) Jim_ListAppendElement(interp, resultList, value); else if (resultVec[descr->pos - 1] == emptyStr) { Jim_DecrRefCount(interp, resultVec[descr->pos - 1]); Jim_IncrRefCount(value); resultVec[descr->pos - 1] = value; } else { Jim_FreeNewObj(interp, value); goto err; } } Jim_DecrRefCount(interp, emptyStr); return resultList; eof: |
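At script level the descriptors parsed and matched above back the [scan] command. A hedged illustration of the conversions involved (a literal prefix, %d, and a %[...] character class), assuming the usual [scan] command name and semantics:

    /* Illustrative only: should return 2 and set portNum=8080, svcName=web. */
    Jim_EvalObj(interp, Jim_NewStringObj(interp,
        "scan {port=8080 name=web} {port=%d name=%[a-z]} portNum svcName", -1));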
︙ | ︙ | |||
14636 14637 14638 14639 14640 14641 14642 | static void JimRandomBytes(Jim_Interp *interp, void *dest, unsigned int len) { Jim_PrngState *prng; unsigned char *destByte = (unsigned char *)dest; unsigned int si, sj, x; | | | | | | | 14714 14715 14716 14717 14718 14719 14720 14721 14722 14723 14724 14725 14726 14727 14728 14729 14730 14731 14732 14733 14734 14735 14736 14737 14738 14739 14740 14741 14742 14743 14744 14745 14746 14747 14748 14749 14750 14751 14752 14753 14754 14755 14756 14757 14758 | static void JimRandomBytes(Jim_Interp *interp, void *dest, unsigned int len) { Jim_PrngState *prng; unsigned char *destByte = (unsigned char *)dest; unsigned int si, sj, x; if (interp->prngState == NULL) JimPrngInit(interp); prng = interp->prngState; for (x = 0; x < len; x++) { prng->i = (prng->i + 1) & 0xff; si = prng->sbox[prng->i]; prng->j = (prng->j + si) & 0xff; sj = prng->sbox[prng->j]; prng->sbox[prng->i] = sj; prng->sbox[prng->j] = si; *destByte++ = prng->sbox[(si + sj) & 0xff]; } } static void JimPrngSeed(Jim_Interp *interp, unsigned char *seed, int seedLen) { int i; Jim_PrngState *prng; if (interp->prngState == NULL) JimPrngInit(interp); prng = interp->prngState; for (i = 0; i < 256; i++) prng->sbox[i] = i; for (i = 0; i < seedLen; i++) { unsigned char t; t = prng->sbox[i & 0xFF]; prng->sbox[i & 0xFF] = prng->sbox[seed[i]]; prng->sbox[seed[i]] = t; } |
︙ | ︙ | |||
14697 14698 14699 14700 14701 14702 14703 | } if (argc == 3) { if (Jim_GetWide(interp, argv[2], &increment) != JIM_OK) return JIM_ERR; } intObjPtr = Jim_GetVariable(interp, argv[1], JIM_UNSHARED); if (!intObjPtr) { | | | | | | | | | 14775 14776 14777 14778 14779 14780 14781 14782 14783 14784 14785 14786 14787 14788 14789 14790 14791 14792 14793 14794 14795 14796 14797 14798 14799 14800 14801 14802 14803 14804 14805 14806 14807 14808 14809 14810 14811 14812 14813 14814 14815 14816 14817 14818 14819 14820 14821 14822 14823 14824 14825 14826 14827 14828 14829 14830 14831 14832 14833 14834 14835 | } if (argc == 3) { if (Jim_GetWide(interp, argv[2], &increment) != JIM_OK) return JIM_ERR; } intObjPtr = Jim_GetVariable(interp, argv[1], JIM_UNSHARED); if (!intObjPtr) { wideValue = 0; } else if (Jim_GetWide(interp, intObjPtr, &wideValue) != JIM_OK) { return JIM_ERR; } if (!intObjPtr || Jim_IsShared(intObjPtr)) { intObjPtr = Jim_NewIntObj(interp, wideValue + increment); if (Jim_SetVariable(interp, argv[1], intObjPtr) != JIM_OK) { Jim_FreeNewObj(interp, intObjPtr); return JIM_ERR; } } else { Jim_InvalidateStringRep(intObjPtr); JimWideValue(intObjPtr) = wideValue + increment; if (argv[1]->typePtr != &variableObjType) { Jim_SetVariable(interp, argv[1], intObjPtr); } } Jim_SetResult(interp, intObjPtr); return JIM_OK; } #define JIM_EVAL_SARGV_LEN 8 #define JIM_EVAL_SINTV_LEN 8 static int JimUnknown(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { int retcode; if (interp->unknown_called > 50) { return JIM_ERR; } if (Jim_GetCommand(interp, interp->unknown, JIM_NONE) == NULL) return JIM_ERR; interp->unknown_called++; retcode = Jim_EvalObjPrefix(interp, interp->unknown, argc, argv); interp->unknown_called--; return retcode; } static int JimInvokeCommand(Jim_Interp *interp, int objc, Jim_Obj *const *objv) |
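The first hunk above appears to be the core of the [incr] command: a missing variable is treated as 0, a shared integer object is replaced by a fresh one, and an unshared one is patched in place after Jim_InvalidateStringRep. Driving it from C might look like the sketch below (the variable name is arbitrary); note that Jim_EvalObjVector, shown just after, takes and releases its own references on the argument vector:

    Jim_Obj *objv[2];
    objv[0] = Jim_NewStringObj(interp, "incr", -1);
    objv[1] = Jim_NewStringObj(interp, "counter", -1);
    if (Jim_EvalObjVector(interp, 2, objv) == JIM_OK) {
        jim_wide newValue;
        /* The command's result is the new integer value of "counter". */
        Jim_GetWide(interp, Jim_GetResult(interp), &newValue);
    }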
︙ | ︙ | |||
14765 14766 14767 14768 14769 14770 14771 | for (j = 0; j < objc; j++) { printf(" '%s'", Jim_String(objv[j])); } printf("\n"); #endif if (interp->framePtr->tailcallCmd) { | | | | | | 14843 14844 14845 14846 14847 14848 14849 14850 14851 14852 14853 14854 14855 14856 14857 14858 14859 14860 14861 14862 14863 14864 14865 14866 14867 14868 14869 14870 14871 14872 14873 14874 14875 14876 14877 14878 14879 14880 14881 14882 14883 14884 14885 14886 14887 14888 14889 14890 14891 14892 14893 14894 14895 14896 14897 14898 14899 14900 14901 14902 14903 | for (j = 0; j < objc; j++) { printf(" '%s'", Jim_String(objv[j])); } printf("\n"); #endif if (interp->framePtr->tailcallCmd) { cmdPtr = interp->framePtr->tailcallCmd; interp->framePtr->tailcallCmd = NULL; } else { cmdPtr = Jim_GetCommand(interp, objv[0], JIM_ERRMSG); if (cmdPtr == NULL) { return JimUnknown(interp, objc, objv); } JimIncrCmdRefCount(cmdPtr); } if (interp->evalDepth == interp->maxEvalDepth) { Jim_SetResultString(interp, "Infinite eval recursion", -1); retcode = JIM_ERR; goto out; } interp->evalDepth++; Jim_SetEmptyResult(interp); if (cmdPtr->isproc) { retcode = JimCallProcedure(interp, cmdPtr, objc, objv); } else { interp->cmdPrivData = cmdPtr->u.native.privData; retcode = cmdPtr->u.native.cmdProc(interp, objc, objv); } interp->evalDepth--; out: JimDecrCmdRefCount(interp, cmdPtr); return retcode; } int Jim_EvalObjVector(Jim_Interp *interp, int objc, Jim_Obj *const *objv) { int i, retcode; for (i = 0; i < objc; i++) Jim_IncrRefCount(objv[i]); retcode = JimInvokeCommand(interp, objc, objv); for (i = 0; i < objc; i++) Jim_DecrRefCount(interp, objv[i]); return retcode; } int Jim_EvalObjPrefix(Jim_Interp *interp, Jim_Obj *prefix, int objc, Jim_Obj *const *objv) |
︙ | ︙ | |||
14833 14834 14835 14836 14837 14838 14839 | Jim_Free(nargv); return ret; } static void JimAddErrorToStack(Jim_Interp *interp, ScriptObj *script) { if (!interp->errorFlag) { | | | | | | 14911 14912 14913 14914 14915 14916 14917 14918 14919 14920 14921 14922 14923 14924 14925 14926 14927 14928 14929 14930 14931 14932 14933 14934 14935 14936 14937 14938 14939 | Jim_Free(nargv); return ret; } static void JimAddErrorToStack(Jim_Interp *interp, ScriptObj *script) { if (!interp->errorFlag) { interp->errorFlag = 1; Jim_IncrRefCount(script->fileNameObj); Jim_DecrRefCount(interp, interp->errorFileNameObj); interp->errorFileNameObj = script->fileNameObj; interp->errorLine = script->linenr; JimResetStackTrace(interp); interp->addStackTrace++; } if (interp->addStackTrace > 0) { JimAppendStackTrace(interp, Jim_String(interp->errorProc), script->fileNameObj, script->linenr); if (Jim_Length(script->fileNameObj)) { interp->addStackTrace = 0; } |
︙ | ︙ | |||
14886 14887 14888 14889 14890 14891 14892 | case JIM_TT_CMD: switch (Jim_EvalObj(interp, token->objPtr)) { case JIM_OK: case JIM_RETURN: objPtr = interp->result; break; case JIM_BREAK: | | | | 14964 14965 14966 14967 14968 14969 14970 14971 14972 14973 14974 14975 14976 14977 14978 14979 14980 14981 | case JIM_TT_CMD: switch (Jim_EvalObj(interp, token->objPtr)) { case JIM_OK: case JIM_RETURN: objPtr = interp->result; break; case JIM_BREAK: return JIM_BREAK; case JIM_CONTINUE: return JIM_CONTINUE; default: return JIM_ERR; } break; default: JimPanic((1, |
︙ | ︙ | |||
14928 14929 14930 14931 14932 14933 14934 | for (i = 0; i < tokens; i++) { switch (JimSubstOneToken(interp, &token[i], &intv[i])) { case JIM_OK: case JIM_RETURN: break; case JIM_BREAK: if (flags & JIM_SUBST_FLAG) { | | | | | | | | | | | 15006 15007 15008 15009 15010 15011 15012 15013 15014 15015 15016 15017 15018 15019 15020 15021 15022 15023 15024 15025 15026 15027 15028 15029 15030 15031 15032 15033 15034 15035 15036 15037 15038 15039 15040 15041 15042 15043 15044 15045 15046 15047 15048 15049 15050 15051 15052 15053 15054 15055 15056 15057 15058 15059 15060 15061 15062 15063 15064 15065 15066 15067 15068 15069 15070 15071 15072 15073 15074 15075 15076 15077 15078 15079 | for (i = 0; i < tokens; i++) { switch (JimSubstOneToken(interp, &token[i], &intv[i])) { case JIM_OK: case JIM_RETURN: break; case JIM_BREAK: if (flags & JIM_SUBST_FLAG) { tokens = i; continue; } case JIM_CONTINUE: if (flags & JIM_SUBST_FLAG) { intv[i] = NULL; continue; } default: while (i--) { Jim_DecrRefCount(interp, intv[i]); } if (intv != sintv) { Jim_Free(intv); } return NULL; } Jim_IncrRefCount(intv[i]); Jim_String(intv[i]); totlen += intv[i]->length; } if (tokens == 1 && intv[0] && intv == sintv) { Jim_DecrRefCount(interp, intv[0]); return intv[0]; } objPtr = Jim_NewStringObjNoAlloc(interp, NULL, 0); if (tokens == 4 && token[0].type == JIM_TT_ESC && token[1].type == JIM_TT_ESC && token[2].type == JIM_TT_VAR) { objPtr->typePtr = &interpolatedObjType; objPtr->internalRep.dictSubstValue.varNameObjPtr = token[0].objPtr; objPtr->internalRep.dictSubstValue.indexObjPtr = intv[2]; Jim_IncrRefCount(intv[2]); } else if (tokens && intv[0] && intv[0]->typePtr == &sourceObjType) { JimSetSourceInfo(interp, objPtr, intv[0]->internalRep.sourceValue.fileNameObj, intv[0]->internalRep.sourceValue.lineNumber); } s = objPtr->bytes = Jim_Alloc(totlen + 1); objPtr->length = totlen; for (i = 0; i < tokens; i++) { if (intv[i]) { memcpy(s, intv[i]->bytes, intv[i]->length); s += intv[i]->length; Jim_DecrRefCount(interp, intv[i]); } } objPtr->bytes[totlen] = '\0'; if (intv != sintv) { Jim_Free(intv); } return objPtr; } |
︙ | ︙ | |||
15031 15032 15033 15034 15035 15036 15037 | Jim_Obj *sargv[JIM_EVAL_SARGV_LEN], **argv = NULL; Jim_Obj *prevScriptObj; if (Jim_IsList(scriptObjPtr) && scriptObjPtr->bytes == NULL) { return JimEvalObjList(interp, scriptObjPtr); } | | | 15109 15110 15111 15112 15113 15114 15115 15116 15117 15118 15119 15120 15121 15122 15123 | Jim_Obj *sargv[JIM_EVAL_SARGV_LEN], **argv = NULL; Jim_Obj *prevScriptObj; if (Jim_IsList(scriptObjPtr) && scriptObjPtr->bytes == NULL) { return JimEvalObjList(interp, scriptObjPtr); } Jim_IncrRefCount(scriptObjPtr); script = JimGetScript(interp, scriptObjPtr); if (!JimScriptValid(interp, script)) { Jim_DecrRefCount(interp, scriptObjPtr); return JIM_ERR; } Jim_SetEmptyResult(interp); |
︙ | ︙ | |||
15067 15068 15069 15070 15071 15072 15073 | return JIM_OK; } } #endif script->inUse++; | | | | | | 15145 15146 15147 15148 15149 15150 15151 15152 15153 15154 15155 15156 15157 15158 15159 15160 15161 15162 15163 15164 15165 15166 15167 15168 15169 15170 15171 15172 15173 15174 15175 15176 15177 15178 | return JIM_OK; } } #endif script->inUse++; prevScriptObj = interp->currentScriptObj; interp->currentScriptObj = scriptObjPtr; interp->errorFlag = 0; argv = sargv; for (i = 0; i < script->len && retcode == JIM_OK; ) { int argc; int j; argc = token[i].objPtr->internalRep.scriptLineValue.argc; script->linenr = token[i].objPtr->internalRep.scriptLineValue.line; if (argc > JIM_EVAL_SARGV_LEN) argv = Jim_Alloc(sizeof(Jim_Obj *) * argc); i++; for (j = 0; j < argc; j++) { long wordtokens = 1; int expand = 0; Jim_Obj *wordObjPtr = NULL; |
︙ | ︙ | |||
15146 15147 15148 15149 15150 15151 15152 | Jim_IncrRefCount(wordObjPtr); i += wordtokens; if (!expand) { argv[j] = wordObjPtr; } else { | | | | | | | | | | | | | | | | | | | 15224 15225 15226 15227 15228 15229 15230 15231 15232 15233 15234 15235 15236 15237 15238 15239 15240 15241 15242 15243 15244 15245 15246 15247 15248 15249 15250 15251 15252 15253 15254 15255 15256 15257 15258 15259 15260 15261 15262 15263 15264 15265 15266 15267 15268 15269 15270 15271 15272 15273 15274 15275 15276 15277 15278 15279 15280 15281 15282 15283 15284 15285 15286 15287 15288 15289 15290 15291 15292 15293 15294 15295 15296 15297 15298 15299 15300 15301 15302 15303 15304 15305 15306 15307 15308 15309 15310 15311 15312 15313 15314 15315 15316 15317 15318 15319 15320 15321 15322 15323 15324 15325 15326 15327 15328 15329 15330 15331 15332 15333 15334 15335 15336 15337 15338 15339 15340 15341 15342 15343 15344 15345 15346 15347 15348 15349 15350 15351 15352 15353 15354 15355 15356 15357 | Jim_IncrRefCount(wordObjPtr); i += wordtokens; if (!expand) { argv[j] = wordObjPtr; } else { int len = Jim_ListLength(interp, wordObjPtr); int newargc = argc + len - 1; int k; if (len > 1) { if (argv == sargv) { if (newargc > JIM_EVAL_SARGV_LEN) { argv = Jim_Alloc(sizeof(*argv) * newargc); memcpy(argv, sargv, sizeof(*argv) * j); } } else { argv = Jim_Realloc(argv, sizeof(*argv) * newargc); } } for (k = 0; k < len; k++) { argv[j++] = wordObjPtr->internalRep.listValue.ele[k]; Jim_IncrRefCount(wordObjPtr->internalRep.listValue.ele[k]); } Jim_DecrRefCount(interp, wordObjPtr); j--; argc += len - 1; } } if (retcode == JIM_OK && argc) { retcode = JimInvokeCommand(interp, argc, argv); if (Jim_CheckSignal(interp)) { retcode = JIM_SIGNAL; } } while (j-- > 0) { Jim_DecrRefCount(interp, argv[j]); } if (argv != sargv) { Jim_Free(argv); argv = sargv; } } if (retcode == JIM_ERR) { JimAddErrorToStack(interp, script); } else if (retcode != JIM_RETURN || interp->returnCode != JIM_ERR) { interp->addStackTrace = 0; } interp->currentScriptObj = prevScriptObj; Jim_FreeIntRep(interp, scriptObjPtr); scriptObjPtr->typePtr = &scriptObjType; Jim_SetIntRepPtr(scriptObjPtr, script); Jim_DecrRefCount(interp, scriptObjPtr); return retcode; } static int JimSetProcArg(Jim_Interp *interp, Jim_Obj *argNameObj, Jim_Obj *argValObj) { int retcode; const char *varname = Jim_String(argNameObj); if (*varname == '&') { Jim_Obj *objPtr; Jim_CallFrame *savedCallFrame = interp->framePtr; interp->framePtr = interp->framePtr->parent; objPtr = Jim_GetVariable(interp, argValObj, JIM_ERRMSG); interp->framePtr = savedCallFrame; if (!objPtr) { return JIM_ERR; } objPtr = Jim_NewStringObj(interp, varname + 1, -1); Jim_IncrRefCount(objPtr); retcode = Jim_SetVariableLink(interp, objPtr, argValObj, interp->framePtr->parent); Jim_DecrRefCount(interp, objPtr); } else { retcode = Jim_SetVariable(interp, argNameObj, argValObj); } return retcode; } static void JimSetProcWrongArgs(Jim_Interp *interp, Jim_Obj *procNameObj, Jim_Cmd *cmd) { Jim_Obj *argmsg = Jim_NewStringObj(interp, "", 0); int i; for (i = 0; i < cmd->u.proc.argListLen; i++) { Jim_AppendString(interp, argmsg, " ", 1); if (i == cmd->u.proc.argsPos) { if (cmd->u.proc.arglist[i].defaultObjPtr) { Jim_AppendString(interp, argmsg, "?", 1); Jim_AppendObj(interp, argmsg, cmd->u.proc.arglist[i].defaultObjPtr); Jim_AppendString(interp, argmsg, " ...?", -1); } else { Jim_AppendString(interp, argmsg, "?arg...?", -1); } } else { if (cmd->u.proc.arglist[i].defaultObjPtr) { Jim_AppendString(interp, argmsg, "?", 1); Jim_AppendObj(interp, 
argmsg, cmd->u.proc.arglist[i].nameObjPtr); |
︙ | ︙ | |||
15294 15295 15296 15297 15298 15299 15300 | #ifdef jim_ext_namespace int Jim_EvalNamespace(Jim_Interp *interp, Jim_Obj *scriptObj, Jim_Obj *nsObj) { Jim_CallFrame *callFramePtr; int retcode; | | | | | | | | | | | | | | | | | | < | | < | | | | | | | | | | | | | | | | | < | < | 15372 15373 15374 15375 15376 15377 15378 15379 15380 15381 15382 15383 15384 15385 15386 15387 15388 15389 15390 15391 15392 15393 15394 15395 15396 15397 15398 15399 15400 15401 15402 15403 15404 15405 15406 15407 15408 15409 15410 15411 15412 15413 15414 15415 15416 15417 15418 15419 15420 15421 15422 15423 15424 15425 15426 15427 15428 15429 15430 15431 15432 15433 15434 15435 15436 15437 15438 15439 15440 15441 15442 15443 15444 15445 15446 15447 15448 15449 15450 15451 15452 15453 15454 15455 15456 15457 15458 15459 15460 15461 15462 15463 15464 15465 15466 15467 15468 15469 15470 15471 15472 15473 15474 15475 15476 15477 15478 15479 15480 15481 15482 15483 15484 15485 15486 15487 15488 15489 15490 15491 15492 15493 15494 15495 15496 15497 15498 15499 15500 15501 15502 15503 15504 15505 15506 15507 15508 15509 15510 15511 15512 15513 15514 15515 15516 15517 15518 15519 15520 15521 15522 15523 15524 15525 15526 15527 15528 15529 15530 15531 | #ifdef jim_ext_namespace int Jim_EvalNamespace(Jim_Interp *interp, Jim_Obj *scriptObj, Jim_Obj *nsObj) { Jim_CallFrame *callFramePtr; int retcode; callFramePtr = JimCreateCallFrame(interp, interp->framePtr, nsObj); callFramePtr->argv = &interp->emptyObj; callFramePtr->argc = 0; callFramePtr->procArgsObjPtr = NULL; callFramePtr->procBodyObjPtr = scriptObj; callFramePtr->staticVars = NULL; callFramePtr->fileNameObj = interp->emptyObj; callFramePtr->line = 0; Jim_IncrRefCount(scriptObj); interp->framePtr = callFramePtr; if (interp->framePtr->level == interp->maxCallFrameDepth) { Jim_SetResultString(interp, "Too many nested calls. Infinite recursion?", -1); retcode = JIM_ERR; } else { retcode = Jim_EvalObj(interp, scriptObj); } interp->framePtr = interp->framePtr->parent; JimFreeCallFrame(interp, callFramePtr, JIM_FCF_REUSE); return retcode; } #endif static int JimCallProcedure(Jim_Interp *interp, Jim_Cmd *cmd, int argc, Jim_Obj *const *argv) { Jim_CallFrame *callFramePtr; int i, d, retcode, optargs; ScriptObj *script; if (argc - 1 < cmd->u.proc.reqArity || (cmd->u.proc.argsPos < 0 && argc - 1 > cmd->u.proc.reqArity + cmd->u.proc.optArity)) { JimSetProcWrongArgs(interp, argv[0], cmd); return JIM_ERR; } if (Jim_Length(cmd->u.proc.bodyObjPtr) == 0) { return JIM_OK; } if (interp->framePtr->level == interp->maxCallFrameDepth) { Jim_SetResultString(interp, "Too many nested calls. 
Infinite recursion?", -1); return JIM_ERR; } callFramePtr = JimCreateCallFrame(interp, interp->framePtr, cmd->u.proc.nsObj); callFramePtr->argv = argv; callFramePtr->argc = argc; callFramePtr->procArgsObjPtr = cmd->u.proc.argListObjPtr; callFramePtr->procBodyObjPtr = cmd->u.proc.bodyObjPtr; callFramePtr->staticVars = cmd->u.proc.staticVars; script = JimGetScript(interp, interp->currentScriptObj); callFramePtr->fileNameObj = script->fileNameObj; callFramePtr->line = script->linenr; Jim_IncrRefCount(cmd->u.proc.argListObjPtr); Jim_IncrRefCount(cmd->u.proc.bodyObjPtr); interp->framePtr = callFramePtr; optargs = (argc - 1 - cmd->u.proc.reqArity); i = 1; for (d = 0; d < cmd->u.proc.argListLen; d++) { Jim_Obj *nameObjPtr = cmd->u.proc.arglist[d].nameObjPtr; if (d == cmd->u.proc.argsPos) { Jim_Obj *listObjPtr; int argsLen = 0; if (cmd->u.proc.reqArity + cmd->u.proc.optArity < argc - 1) { argsLen = argc - 1 - (cmd->u.proc.reqArity + cmd->u.proc.optArity); } listObjPtr = Jim_NewListObj(interp, &argv[i], argsLen); if (cmd->u.proc.arglist[d].defaultObjPtr) { nameObjPtr =cmd->u.proc.arglist[d].defaultObjPtr; } retcode = Jim_SetVariable(interp, nameObjPtr, listObjPtr); if (retcode != JIM_OK) { goto badargset; } i += argsLen; continue; } if (cmd->u.proc.arglist[d].defaultObjPtr == NULL || optargs-- > 0) { retcode = JimSetProcArg(interp, nameObjPtr, argv[i++]); } else { retcode = Jim_SetVariable(interp, nameObjPtr, cmd->u.proc.arglist[d].defaultObjPtr); } if (retcode != JIM_OK) { goto badargset; } } retcode = Jim_EvalObj(interp, cmd->u.proc.bodyObjPtr); badargset: interp->framePtr = interp->framePtr->parent; JimFreeCallFrame(interp, callFramePtr, JIM_FCF_REUSE); if (interp->framePtr->tailcallObj) { do { Jim_Obj *tailcallObj = interp->framePtr->tailcallObj; interp->framePtr->tailcallObj = NULL; if (retcode == JIM_EVAL) { retcode = Jim_EvalObjList(interp, tailcallObj); if (retcode == JIM_RETURN) { interp->returnLevel++; } } Jim_DecrRefCount(interp, tailcallObj); } while (interp->framePtr->tailcallObj); if (interp->framePtr->tailcallCmd) { JimDecrCmdRefCount(interp, interp->framePtr->tailcallCmd); interp->framePtr->tailcallCmd = NULL; } } if (retcode == JIM_RETURN) { if (--interp->returnLevel <= 0) { retcode = interp->returnCode; interp->returnCode = JIM_OK; interp->returnLevel = 0; } } |
︙ | ︙ | |||
15559 15560 15561 15562 15563 15564 15565 | Jim_IncrRefCount(scriptObjPtr); prevScriptObj = interp->currentScriptObj; interp->currentScriptObj = scriptObjPtr; retcode = Jim_EvalObj(interp, scriptObjPtr); | | | | 15633 15634 15635 15636 15637 15638 15639 15640 15641 15642 15643 15644 15645 15646 15647 15648 15649 15650 15651 15652 15653 15654 15655 15656 | Jim_IncrRefCount(scriptObjPtr); prevScriptObj = interp->currentScriptObj; interp->currentScriptObj = scriptObjPtr; retcode = Jim_EvalObj(interp, scriptObjPtr); if (retcode == JIM_RETURN) { if (--interp->returnLevel <= 0) { retcode = interp->returnCode; interp->returnCode = JIM_OK; interp->returnLevel = 0; } } if (retcode == JIM_ERR) { interp->addStackTrace++; } interp->currentScriptObj = prevScriptObj; Jim_DecrRefCount(interp, scriptObjPtr); |
︙ | ︙ | |||
15598 15599 15600 15601 15602 15603 15604 | JimParseCmd(pc); return; } if (*pc->p == '$' && !(flags & JIM_SUBST_NOVAR)) { if (JimParseVar(pc) == JIM_OK) { return; } | | | 15672 15673 15674 15675 15676 15677 15678 15679 15680 15681 15682 15683 15684 15685 15686 | JimParseCmd(pc); return; } if (*pc->p == '$' && !(flags & JIM_SUBST_NOVAR)) { if (JimParseVar(pc) == JIM_OK) { return; } pc->tstart = pc->p; flags |= JIM_SUBST_NOVAR; } while (pc->len) { if (*pc->p == '$' && !(flags & JIM_SUBST_NOVAR)) { break; } |
︙ | ︙ | |||
15629 15630 15631 15632 15633 15634 15635 | { int scriptTextLen; const char *scriptText = Jim_GetString(objPtr, &scriptTextLen); struct JimParserCtx parser; struct ScriptObj *script = Jim_Alloc(sizeof(*script)); ParseTokenList tokenlist; | | | | | | | > > > > | | 15703 15704 15705 15706 15707 15708 15709 15710 15711 15712 15713 15714 15715 15716 15717 15718 15719 15720 15721 15722 15723 15724 15725 15726 15727 15728 15729 15730 15731 15732 15733 15734 15735 15736 15737 15738 15739 15740 15741 15742 15743 15744 15745 15746 15747 15748 15749 15750 15751 15752 15753 15754 15755 15756 15757 15758 15759 15760 15761 15762 15763 15764 15765 15766 15767 15768 15769 15770 15771 15772 15773 15774 15775 15776 15777 15778 15779 15780 15781 15782 15783 15784 15785 15786 15787 15788 15789 15790 15791 | { int scriptTextLen; const char *scriptText = Jim_GetString(objPtr, &scriptTextLen); struct JimParserCtx parser; struct ScriptObj *script = Jim_Alloc(sizeof(*script)); ParseTokenList tokenlist; ScriptTokenListInit(&tokenlist); JimParserInit(&parser, scriptText, scriptTextLen, 1); while (1) { JimParseSubst(&parser, flags); if (parser.eof) { break; } ScriptAddToken(&tokenlist, parser.tstart, parser.tend - parser.tstart + 1, parser.tt, parser.tline); } script->inUse = 1; script->substFlags = flags; script->fileNameObj = interp->emptyObj; Jim_IncrRefCount(script->fileNameObj); SubstObjAddTokens(interp, script, &tokenlist); ScriptTokenListFree(&tokenlist); #ifdef DEBUG_SHOW_SUBST { int i; printf("==== Subst ====\n"); for (i = 0; i < script->len; i++) { printf("[%2d] %s '%s'\n", i, jim_tt_name(script->token[i].type), Jim_String(script->token[i].objPtr)); } } #endif Jim_FreeIntRep(interp, objPtr); Jim_SetIntRepPtr(objPtr, script); objPtr->typePtr = &scriptObjType; return JIM_OK; } static ScriptObj *Jim_GetSubst(Jim_Interp *interp, Jim_Obj *objPtr, int flags) { if (objPtr->typePtr != &scriptObjType || ((ScriptObj *)Jim_GetIntRepPtr(objPtr))->substFlags != flags) SetSubstFromAny(interp, objPtr, flags); return (ScriptObj *) Jim_GetIntRepPtr(objPtr); } int Jim_SubstObj(Jim_Interp *interp, Jim_Obj *substObjPtr, Jim_Obj **resObjPtrPtr, int flags) { ScriptObj *script = Jim_GetSubst(interp, substObjPtr, flags); Jim_IncrRefCount(substObjPtr); script->inUse++; *resObjPtrPtr = JimInterpolateTokens(interp, script->token, script->len, flags); script->inUse--; Jim_DecrRefCount(interp, substObjPtr); if (*resObjPtrPtr == NULL) { return JIM_ERR; } return JIM_OK; } void Jim_WrongNumArgs(Jim_Interp *interp, int argc, Jim_Obj *const *argv, const char *msg) { Jim_Obj *objPtr; Jim_Obj *listObjPtr; JimPanic((argc == 0, "Jim_WrongNumArgs() called with argc=0")); listObjPtr = Jim_NewListObj(interp, argv, argc); if (*msg) { Jim_ListAppendElement(interp, listObjPtr, Jim_NewStringObj(interp, msg, -1)); } Jim_IncrRefCount(listObjPtr); objPtr = Jim_ListJoin(interp, listObjPtr, " ", 1); Jim_DecrRefCount(interp, listObjPtr); |
︙ | ︙ | |||
15724 15725 15726 15727 15728 15729 15730 | static Jim_Obj *JimHashtablePatternMatch(Jim_Interp *interp, Jim_HashTable *ht, Jim_Obj *patternObjPtr, JimHashtableIteratorCallbackType *callback, int type) { Jim_HashEntry *he; Jim_Obj *listObjPtr = Jim_NewListObj(interp, NULL, 0); | | | 15802 15803 15804 15805 15806 15807 15808 15809 15810 15811 15812 15813 15814 15815 15816 | static Jim_Obj *JimHashtablePatternMatch(Jim_Interp *interp, Jim_HashTable *ht, Jim_Obj *patternObjPtr, JimHashtableIteratorCallbackType *callback, int type) { Jim_HashEntry *he; Jim_Obj *listObjPtr = Jim_NewListObj(interp, NULL, 0); if (patternObjPtr && JimTrivialMatch(Jim_String(patternObjPtr))) { he = Jim_FindHashEntry(ht, Jim_String(patternObjPtr)); if (he) { callback(interp, listObjPtr, he, type); } } else { |
︙ | ︙ | |||
15755 15756 15757 15758 15759 15760 15761 | static void JimCommandMatch(Jim_Interp *interp, Jim_Obj *listObjPtr, Jim_HashEntry *he, int type) { Jim_Cmd *cmdPtr = Jim_GetHashEntryVal(he); Jim_Obj *objPtr; if (type == JIM_CMDLIST_PROCS && !cmdPtr->isproc) { | | | 15833 15834 15835 15836 15837 15838 15839 15840 15841 15842 15843 15844 15845 15846 15847 | static void JimCommandMatch(Jim_Interp *interp, Jim_Obj *listObjPtr, Jim_HashEntry *he, int type) { Jim_Cmd *cmdPtr = Jim_GetHashEntryVal(he); Jim_Obj *objPtr; if (type == JIM_CMDLIST_PROCS && !cmdPtr->isproc) { return; } objPtr = Jim_NewStringObj(interp, he->key, -1); Jim_IncrRefCount(objPtr); if (type != JIM_CMDLIST_CHANNELS || Jim_AioFilehandle(interp, objPtr)) { |
︙ | ︙ | |||
15815 15816 15817 15818 15819 15820 15821 | { Jim_CallFrame *targetCallFrame; targetCallFrame = JimGetCallFrameByInteger(interp, levelObjPtr); if (targetCallFrame == NULL) { return JIM_ERR; } | | | 15893 15894 15895 15896 15897 15898 15899 15900 15901 15902 15903 15904 15905 15906 15907 | { Jim_CallFrame *targetCallFrame; targetCallFrame = JimGetCallFrameByInteger(interp, levelObjPtr); if (targetCallFrame == NULL) { return JIM_ERR; } if (targetCallFrame == interp->topFramePtr) { Jim_SetResultFormatted(interp, "bad level \"%#s\"", levelObjPtr); return JIM_ERR; } if (info_level_cmd) { *objPtrPtr = Jim_NewListObj(interp, targetCallFrame->argv, targetCallFrame->argc); } |
︙ | ︙ | |||
16002 16003 16004 16005 16006 16007 16008 | objPtr = Jim_GetVariable(interp, argv[1], JIM_ERRMSG); if (!objPtr) return JIM_ERR; Jim_SetResult(interp, objPtr); return JIM_OK; } | | | 16080 16081 16082 16083 16084 16085 16086 16087 16088 16089 16090 16091 16092 16093 16094 | objPtr = Jim_GetVariable(interp, argv[1], JIM_ERRMSG); if (!objPtr) return JIM_ERR; Jim_SetResult(interp, objPtr); return JIM_OK; } if (Jim_SetVariable(interp, argv[1], argv[2]) != JIM_OK) return JIM_ERR; Jim_SetResult(interp, argv[2]); return JIM_OK; } static int Jim_UnsetCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) |
︙ | ︙ | |||
16045 16046 16047 16048 16049 16050 16051 | static int Jim_WhileCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { if (argc != 3) { Jim_WrongNumArgs(interp, 1, argv, "condition body"); return JIM_ERR; } | | | 16123 16124 16125 16126 16127 16128 16129 16130 16131 16132 16133 16134 16135 16136 16137 | static int Jim_WhileCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { if (argc != 3) { Jim_WrongNumArgs(interp, 1, argv, "condition body"); return JIM_ERR; } while (1) { int boolean, retval; if ((retval = Jim_GetBoolFromExpr(interp, argv[1], &boolean)) != JIM_OK) return retval; if (!boolean) break; |
︙ | ︙ | |||
16085 16086 16087 16088 16089 16090 16091 | Jim_Obj *stopVarNamePtr = NULL; if (argc != 5) { Jim_WrongNumArgs(interp, 1, argv, "start test next body"); return JIM_ERR; } | | | | | | | | | | | | | | | | | 16163 16164 16165 16166 16167 16168 16169 16170 16171 16172 16173 16174 16175 16176 16177 16178 16179 16180 16181 16182 16183 16184 16185 16186 16187 16188 16189 16190 16191 16192 16193 16194 16195 16196 16197 16198 16199 16200 16201 16202 16203 16204 16205 16206 16207 16208 16209 16210 16211 16212 16213 16214 16215 16216 16217 16218 16219 16220 16221 16222 16223 16224 16225 16226 16227 16228 16229 16230 16231 16232 16233 16234 16235 16236 16237 16238 16239 16240 16241 16242 16243 16244 16245 16246 16247 16248 16249 16250 16251 16252 16253 16254 16255 16256 16257 16258 16259 16260 16261 16262 16263 16264 16265 16266 16267 16268 16269 16270 16271 16272 16273 16274 | Jim_Obj *stopVarNamePtr = NULL; if (argc != 5) { Jim_WrongNumArgs(interp, 1, argv, "start test next body"); return JIM_ERR; } if ((retval = Jim_EvalObj(interp, argv[1])) != JIM_OK) { return retval; } retval = Jim_GetBoolFromExpr(interp, argv[2], &boolean); #ifdef JIM_OPTIMIZATION if (retval == JIM_OK && boolean) { ScriptObj *incrScript; ExprByteCode *expr; jim_wide stop, currentVal; Jim_Obj *objPtr; int cmpOffset; expr = JimGetExpression(interp, argv[2]); incrScript = JimGetScript(interp, argv[3]); if (incrScript == NULL || incrScript->len != 3 || !expr || expr->len != 3) { goto evalstart; } if (incrScript->token[1].type != JIM_TT_ESC || expr->token[0].type != JIM_TT_VAR || (expr->token[1].type != JIM_TT_EXPR_INT && expr->token[1].type != JIM_TT_VAR)) { goto evalstart; } if (expr->token[2].type == JIM_EXPROP_LT) { cmpOffset = 0; } else if (expr->token[2].type == JIM_EXPROP_LTE) { cmpOffset = 1; } else { goto evalstart; } if (!Jim_CompareStringImmediate(interp, incrScript->token[1].objPtr, "incr")) { goto evalstart; } if (!Jim_StringEqObj(incrScript->token[2].objPtr, expr->token[0].objPtr)) { goto evalstart; } if (expr->token[1].type == JIM_TT_EXPR_INT) { if (Jim_GetWide(interp, expr->token[1].objPtr, &stop) == JIM_ERR) { goto evalstart; } } else { stopVarNamePtr = expr->token[1].objPtr; Jim_IncrRefCount(stopVarNamePtr); stop = 0; } varNamePtr = expr->token[0].objPtr; Jim_IncrRefCount(varNamePtr); objPtr = Jim_GetVariable(interp, varNamePtr, JIM_NONE); if (objPtr == NULL || Jim_GetWide(interp, objPtr, ¤tVal) != JIM_OK) { goto testcond; } while (retval == JIM_OK) { if (stopVarNamePtr) { objPtr = Jim_GetVariable(interp, stopVarNamePtr, JIM_NONE); if (objPtr == NULL || Jim_GetWide(interp, objPtr, &stop) != JIM_OK) { goto testcond; } } if (currentVal >= stop + cmpOffset) { break; } retval = Jim_EvalObj(interp, argv[4]); if (retval == JIM_OK || retval == JIM_CONTINUE) { retval = JIM_OK; objPtr = Jim_GetVariable(interp, varNamePtr, JIM_ERRMSG); if (objPtr == NULL) { retval = JIM_ERR; goto out; } if (!Jim_IsShared(objPtr) && objPtr->typePtr == &intObjType) { currentVal = ++JimWideValue(objPtr); Jim_InvalidateStringRep(objPtr); |
︙ | ︙ | |||
16206 16207 16208 16209 16210 16211 16212 | } goto out; } evalstart: #endif while (boolean && (retval == JIM_OK || retval == JIM_CONTINUE)) { | | | | | | | | 16284 16285 16286 16287 16288 16289 16290 16291 16292 16293 16294 16295 16296 16297 16298 16299 16300 16301 16302 16303 16304 16305 16306 16307 16308 16309 16310 16311 16312 | } goto out; } evalstart: #endif while (boolean && (retval == JIM_OK || retval == JIM_CONTINUE)) { retval = Jim_EvalObj(interp, argv[4]); if (retval == JIM_OK || retval == JIM_CONTINUE) { JIM_IF_OPTIM(evalnext:) retval = Jim_EvalObj(interp, argv[3]); if (retval == JIM_OK || retval == JIM_CONTINUE) { JIM_IF_OPTIM(testcond:) retval = Jim_GetBoolFromExpr(interp, argv[2], &boolean); } } } JIM_IF_OPTIM(out:) if (stopVarNamePtr) { Jim_DecrRefCount(interp, stopVarNamePtr); } if (varNamePtr) { Jim_DecrRefCount(interp, varNamePtr); } |
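The JIM_OPTIMIZATION block above fast-paths [for] loops whose condition is a plain "$var < limit" or "$var <= limit" and whose next-script is exactly "incr var" on the same variable. A hedged sketch of a loop that should satisfy those checks (names and bounds are arbitrary):

    Jim_EvalObj(interp, Jim_NewStringObj(interp,
        "for {set i 0} {$i < 10} {incr i} {\n"
        "    lappend squares [expr {$i * $i}]\n"
        "}", -1));

Loops whose condition or increment script take any other shape fall through to the generic evalstart path shown above.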
︙ | ︙ | |||
16266 16267 16268 16269 16270 16271 16272 | while (((i < limit && incr > 0) || (i > limit && incr < 0)) && retval == JIM_OK) { retval = Jim_EvalObj(interp, bodyObjPtr); if (retval == JIM_OK || retval == JIM_CONTINUE) { Jim_Obj *objPtr = Jim_GetVariable(interp, argv[1], JIM_ERRMSG); retval = JIM_OK; | | | 16344 16345 16346 16347 16348 16349 16350 16351 16352 16353 16354 16355 16356 16357 16358 | while (((i < limit && incr > 0) || (i > limit && incr < 0)) && retval == JIM_OK) { retval = Jim_EvalObj(interp, bodyObjPtr); if (retval == JIM_OK || retval == JIM_CONTINUE) { Jim_Obj *objPtr = Jim_GetVariable(interp, argv[1], JIM_ERRMSG); retval = JIM_OK; i += incr; if (objPtr && !Jim_IsShared(objPtr) && objPtr->typePtr == &intObjType) { if (argv[1]->typePtr != &variableObjType) { if (Jim_SetVariable(interp, argv[1], objPtr) != JIM_OK) { return JIM_ERR; } |
︙ | ︙ | |||
16331 16332 16333 16334 16335 16336 16337 | } static int JimForeachMapHelper(Jim_Interp *interp, int argc, Jim_Obj *const *argv, int doMap) { int result = JIM_OK; int i, numargs; | | | | | 16409 16410 16411 16412 16413 16414 16415 16416 16417 16418 16419 16420 16421 16422 16423 16424 16425 16426 16427 16428 16429 16430 16431 16432 16433 | } static int JimForeachMapHelper(Jim_Interp *interp, int argc, Jim_Obj *const *argv, int doMap) { int result = JIM_OK; int i, numargs; Jim_ListIter twoiters[2]; Jim_ListIter *iters; Jim_Obj *script; Jim_Obj *resultObj; if (argc < 4 || argc % 2 != 0) { Jim_WrongNumArgs(interp, 1, argv, "varList list ?varList list ...? script"); return JIM_ERR; } script = argv[argc - 1]; numargs = (argc - 1 - 1); if (numargs == 2) { iters = twoiters; } else { iters = Jim_Alloc(numargs * sizeof(*iters)); } |
︙ | ︙ | |||
16369 16370 16371 16372 16373 16374 16375 | } else { resultObj = interp->emptyObj; } Jim_IncrRefCount(resultObj); while (1) { | | | | | | | | 16447 16448 16449 16450 16451 16452 16453 16454 16455 16456 16457 16458 16459 16460 16461 16462 16463 16464 16465 16466 16467 16468 16469 16470 16471 16472 16473 16474 16475 16476 16477 16478 16479 16480 16481 16482 16483 16484 | } else { resultObj = interp->emptyObj; } Jim_IncrRefCount(resultObj); while (1) { for (i = 0; i < numargs; i += 2) { if (!JimListIterDone(interp, &iters[i + 1])) { break; } } if (i == numargs) { break; } for (i = 0; i < numargs; i += 2) { Jim_Obj *varName; JimListIterInit(&iters[i], argv[i + 1]); while ((varName = JimListIterNext(interp, &iters[i])) != NULL) { Jim_Obj *valObj = JimListIterNext(interp, &iters[i + 1]); if (!valObj) { valObj = interp->emptyObj; } Jim_IncrRefCount(valObj); result = Jim_SetVariable(interp, varName, valObj); Jim_DecrRefCount(interp, valObj); if (result != JIM_OK) { goto err; } } |
︙ | ︙ | |||
16478 16479 16480 16481 16482 16483 16484 | static int Jim_IfCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { int boolean, retval, current = 1, falsebody = 0; if (argc >= 3) { while (1) { | | | | | | | | 16556 16557 16558 16559 16560 16561 16562 16563 16564 16565 16566 16567 16568 16569 16570 16571 16572 16573 16574 16575 16576 16577 16578 16579 16580 16581 16582 16583 16584 16585 16586 16587 16588 16589 16590 16591 16592 16593 16594 16595 16596 16597 16598 16599 16600 | static int Jim_IfCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { int boolean, retval, current = 1, falsebody = 0; if (argc >= 3) { while (1) { if (current >= argc) goto err; if ((retval = Jim_GetBoolFromExpr(interp, argv[current++], &boolean)) != JIM_OK) return retval; if (current >= argc) goto err; if (Jim_CompareStringImmediate(interp, argv[current], "then")) current++; if (current >= argc) goto err; if (boolean) return Jim_EvalObj(interp, argv[current]); if (++current >= argc) { Jim_SetResult(interp, Jim_NewEmptyStringObj(interp)); return JIM_OK; } falsebody = current++; if (Jim_CompareStringImmediate(interp, argv[falsebody], "else")) { if (current != argc - 1) goto err; return Jim_EvalObj(interp, argv[current]); } else if (Jim_CompareStringImmediate(interp, argv[falsebody], "elseif")) continue; else if (falsebody != argc - 1) goto err; return Jim_EvalObj(interp, argv[falsebody]); } return JIM_OK; } err: |
︙ | ︙ | |||
16620 16621 16622 16623 16624 16625 16626 | break; case SWITCH_GLOB: if (Jim_StringMatchObj(interp, patObj, strObj, 0)) script = caseList[i + 1]; break; case SWITCH_RE: command = Jim_NewStringObj(interp, "regexp", -1); | | | | 16698 16699 16700 16701 16702 16703 16704 16705 16706 16707 16708 16709 16710 16711 16712 16713 16714 16715 16716 16717 16718 16719 16720 16721 16722 | break; case SWITCH_GLOB: if (Jim_StringMatchObj(interp, patObj, strObj, 0)) script = caseList[i + 1]; break; case SWITCH_RE: command = Jim_NewStringObj(interp, "regexp", -1); case SWITCH_CMD:{ int rc = Jim_CommandMatchObj(interp, command, patObj, strObj, 0); if (argc - opt == 1) { Jim_Obj **vector; JimListGetElements(interp, argv[opt], &patCount, &vector); caseList = vector; } if (rc < 0) { return -rc; } if (rc) script = caseList[i + 1]; break; } |
︙ | ︙ | |||
16768 16769 16770 16771 16772 16773 16774 | opt_all = 1; break; case OPT_COMMAND: if (i >= argc - 2) { goto wrongargs; } commandObj = argv[++i]; | | | 16846 16847 16848 16849 16850 16851 16852 16853 16854 16855 16856 16857 16858 16859 16860 | opt_all = 1; break; case OPT_COMMAND: if (i >= argc - 2) { goto wrongargs; } commandObj = argv[++i]; case OPT_EXACT: case OPT_GLOB: case OPT_REGEXP: opt_match = option; break; } } |
︙ | ︙ | |||
16816 16817 16818 16819 16820 16821 16822 | } rc = JIM_ERR; goto done; } break; } | | | | 16894 16895 16896 16897 16898 16899 16900 16901 16902 16903 16904 16905 16906 16907 16908 16909 16910 16911 16912 16913 16914 | } rc = JIM_ERR; goto done; } break; } if (!eq && opt_bool && opt_not && !opt_all) { continue; } if ((!opt_bool && eq == !opt_not) || (opt_bool && (eq || opt_all))) { Jim_Obj *resultObj; if (opt_bool) { resultObj = Jim_NewIntObj(interp, eq ^ opt_not); } else if (!opt_inline) { resultObj = Jim_NewIntObj(interp, i); |
︙ | ︙ | |||
16849 16850 16851 16852 16853 16854 16855 | } } if (opt_all) { Jim_SetResult(interp, listObjPtr); } else { | | > | | | < < | < | < > > | | 16927 16928 16929 16930 16931 16932 16933 16934 16935 16936 16937 16938 16939 16940 16941 16942 16943 16944 16945 16946 16947 16948 16949 16950 16951 16952 16953 16954 16955 16956 16957 16958 16959 16960 16961 16962 16963 16964 16965 16966 16967 16968 16969 16970 16971 16972 16973 16974 16975 16976 16977 16978 16979 16980 16981 | } } if (opt_all) { Jim_SetResult(interp, listObjPtr); } else { if (opt_bool) { Jim_SetResultBool(interp, opt_not); } else if (!opt_inline) { Jim_SetResultInt(interp, -1); } } done: if (commandObj) { Jim_DecrRefCount(interp, commandObj); } return rc; } static int Jim_LappendCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { Jim_Obj *listObjPtr; int new_obj = 0; int i; if (argc < 2) { Jim_WrongNumArgs(interp, 1, argv, "varName ?value value ...?"); return JIM_ERR; } listObjPtr = Jim_GetVariable(interp, argv[1], JIM_UNSHARED); if (!listObjPtr) { listObjPtr = Jim_NewListObj(interp, NULL, 0); new_obj = 1; } else if (Jim_IsShared(listObjPtr)) { listObjPtr = Jim_DuplicateObj(interp, listObjPtr); new_obj = 1; } for (i = 2; i < argc; i++) Jim_ListAppendElement(interp, listObjPtr, argv[i]); if (Jim_SetVariable(interp, argv[1], listObjPtr) != JIM_OK) { if (new_obj) Jim_FreeNewObj(interp, listObjPtr); return JIM_ERR; } Jim_SetResult(interp, listObjPtr); return JIM_OK; } |
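Jim_LappendCoreCommand above follows the usual copy-on-write discipline for variables that hold shared objects. The helper below is a hypothetical distillation of that pattern (it is not part of the diff), appending a single element to a list variable:

    /* Sketch: fetch unshared, duplicate if shared, append, write back. */
    static int AppendOneToListVar(Jim_Interp *interp, Jim_Obj *varName, Jim_Obj *value)
    {
        int new_obj = 0;
        Jim_Obj *listObjPtr = Jim_GetVariable(interp, varName, JIM_UNSHARED);
        if (!listObjPtr) {
            listObjPtr = Jim_NewListObj(interp, NULL, 0);
            new_obj = 1;
        } else if (Jim_IsShared(listObjPtr)) {
            listObjPtr = Jim_DuplicateObj(interp, listObjPtr);
            new_obj = 1;
        }
        Jim_ListAppendElement(interp, listObjPtr, value);
        if (Jim_SetVariable(interp, varName, listObjPtr) != JIM_OK) {
            if (new_obj)
                Jim_FreeNewObj(interp, listObjPtr);
            return JIM_ERR;
        }
        return JIM_OK;
    }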
︙ | ︙ | |||
16952 16953 16954 16955 16956 16957 16958 | len = Jim_ListLength(interp, listObj); first = JimRelToAbsIndex(len, first); last = JimRelToAbsIndex(len, last); JimRelToAbsRange(len, &first, &last, &rangeLen); | | | | | | | | | 17029 17030 17031 17032 17033 17034 17035 17036 17037 17038 17039 17040 17041 17042 17043 17044 17045 17046 17047 17048 17049 17050 17051 17052 17053 17054 17055 17056 17057 17058 17059 17060 17061 17062 17063 17064 17065 17066 17067 17068 17069 17070 17071 17072 17073 17074 17075 17076 17077 17078 | len = Jim_ListLength(interp, listObj); first = JimRelToAbsIndex(len, first); last = JimRelToAbsIndex(len, last); JimRelToAbsRange(len, &first, &last, &rangeLen); if (first < len) { } else if (len == 0) { first = 0; } else { Jim_SetResultString(interp, "list doesn't contain element ", -1); Jim_AppendObj(interp, Jim_GetResult(interp), argv[2]); return JIM_ERR; } newListObj = Jim_NewListObj(interp, listObj->internalRep.listValue.ele, first); ListInsertElements(newListObj, -1, argc - 4, argv + 4); ListInsertElements(newListObj, -1, len - first - rangeLen, listObj->internalRep.listValue.ele + first + rangeLen); Jim_SetResult(interp, newListObj); return JIM_OK; } static int Jim_LsetCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { if (argc < 3) { Jim_WrongNumArgs(interp, 1, argv, "listVar ?index...? newVal"); return JIM_ERR; } else if (argc == 3) { if (Jim_SetVariable(interp, argv[1], argv[2]) != JIM_OK) return JIM_ERR; Jim_SetResult(interp, argv[2]); return JIM_OK; } return Jim_ListSetIndex(interp, argv[1], argv + 2, argc - 3, argv[argc - 1]); } |
︙ | ︙ | |||
17099 17100 17101 17102 17103 17104 17105 | } if (argc == 2) { stringObjPtr = Jim_GetVariable(interp, argv[1], JIM_ERRMSG); if (!stringObjPtr) return JIM_ERR; } else { | | | | | | | 17176 17177 17178 17179 17180 17181 17182 17183 17184 17185 17186 17187 17188 17189 17190 17191 17192 17193 17194 17195 17196 17197 17198 17199 17200 17201 17202 17203 17204 17205 | } if (argc == 2) { stringObjPtr = Jim_GetVariable(interp, argv[1], JIM_ERRMSG); if (!stringObjPtr) return JIM_ERR; } else { int new_obj = 0; stringObjPtr = Jim_GetVariable(interp, argv[1], JIM_UNSHARED); if (!stringObjPtr) { stringObjPtr = Jim_NewEmptyStringObj(interp); new_obj = 1; } else if (Jim_IsShared(stringObjPtr)) { new_obj = 1; stringObjPtr = Jim_DuplicateObj(interp, stringObjPtr); } for (i = 2; i < argc; i++) { Jim_AppendObj(interp, stringObjPtr, argv[i]); } if (Jim_SetVariable(interp, argv[1], stringObjPtr) != JIM_OK) { if (new_obj) { Jim_FreeNewObj(interp, stringObjPtr); } return JIM_ERR; } } Jim_SetResult(interp, stringObjPtr); return JIM_OK; |
︙ | ︙ | |||
17151 17152 17153 17154 17155 17156 17157 | rc = Jim_EvalObj(interp, argv[1]); } else { rc = Jim_EvalObj(interp, Jim_ConcatObj(interp, argc - 1, argv + 1)); } if (rc == JIM_ERR) { | | < | | | < < < < | 17228 17229 17230 17231 17232 17233 17234 17235 17236 17237 17238 17239 17240 17241 17242 17243 17244 17245 17246 17247 17248 17249 17250 17251 17252 17253 17254 17255 17256 17257 17258 17259 17260 17261 17262 17263 17264 17265 17266 17267 17268 17269 17270 17271 17272 17273 17274 17275 17276 17277 17278 17279 17280 17281 17282 17283 | rc = Jim_EvalObj(interp, argv[1]); } else { rc = Jim_EvalObj(interp, Jim_ConcatObj(interp, argc - 1, argv + 1)); } if (rc == JIM_ERR) { interp->addStackTrace++; } return rc; } static int Jim_UplevelCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { if (argc >= 2) { int retcode; Jim_CallFrame *savedCallFrame, *targetCallFrame; const char *str; savedCallFrame = interp->framePtr; str = Jim_String(argv[1]); if ((str[0] >= '0' && str[0] <= '9') || str[0] == '#') { targetCallFrame = Jim_GetCallFrameByLevel(interp, argv[1]); argc--; argv++; } else { targetCallFrame = Jim_GetCallFrameByLevel(interp, NULL); } if (targetCallFrame == NULL) { return JIM_ERR; } if (argc < 2) { Jim_WrongNumArgs(interp, 1, argv - 1, "?level? command ?arg ...?"); return JIM_ERR; } interp->framePtr = targetCallFrame; if (argc == 2) { retcode = Jim_EvalObj(interp, argv[1]); } else { retcode = Jim_EvalObj(interp, Jim_ConcatObj(interp, argc - 1, argv + 1)); } interp->framePtr = savedCallFrame; return retcode; } else { Jim_WrongNumArgs(interp, 1, argv, "?level? command ?arg ...?"); return JIM_ERR; } |
︙ | ︙
17292 17293 17294 17295 17296 17297 17298 | } if (i != argc - 1 && i != argc) { Jim_WrongNumArgs(interp, 1, argv, "?-code code? ?-errorinfo stacktrace? ?-level level? ?result?"); } | | | | | | | | | 17364 17365 17366 17367 17368 17369 17370 17371 17372 17373 17374 17375 17376 17377 17378 17379 17380 17381 17382 17383 17384 17385 17386 17387 17388 17389 17390 17391 17392 17393 17394 17395 17396 17397 17398 17399 17400 17401 17402 17403 17404 17405 17406 17407 17408 17409 17410 17411 17412 17413 17414 17415 17416 17417 17418 17419 17420 17421 17422 17423 17424 17425 17426 17427 17428 17429 17430 17431 17432 17433 17434 | } if (i != argc - 1 && i != argc) { Jim_WrongNumArgs(interp, 1, argv, "?-code code? ?-errorinfo stacktrace? ?-level level? ?result?"); } if (stackTraceObj && returnCode == JIM_ERR) { JimSetStackTrace(interp, stackTraceObj); } if (errorCodeObj && returnCode == JIM_ERR) { Jim_SetGlobalVariableStr(interp, "errorCode", errorCodeObj); } interp->returnCode = returnCode; interp->returnLevel = level; if (i == argc - 1) { Jim_SetResult(interp, argv[i]); } return JIM_RETURN; } static int Jim_TailcallCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { if (interp->framePtr->level == 0) { Jim_SetResultString(interp, "tailcall can only be called from a proc or lambda", -1); return JIM_ERR; } else if (argc >= 2) { Jim_CallFrame *cf = interp->framePtr->parent; Jim_Cmd *cmdPtr = Jim_GetCommand(interp, argv[1], JIM_ERRMSG); if (cmdPtr == NULL) { return JIM_ERR; } JimPanic((cf->tailcallCmd != NULL, "Already have a tailcallCmd")); JimIncrCmdRefCount(cmdPtr); cf->tailcallCmd = cmdPtr; JimPanic((cf->tailcallObj != NULL, "Already have a tailcallobj")); cf->tailcallObj = Jim_NewListObj(interp, argv + 1, argc - 1); Jim_IncrRefCount(cf->tailcallObj); return JIM_EVAL; } return JIM_OK; } static int JimAliasCmd(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { Jim_Obj *cmdList; Jim_Obj *prefixListObj = Jim_CmdPrivData(interp); cmdList = Jim_DuplicateObj(interp, prefixListObj); Jim_ListInsertElements(interp, cmdList, Jim_ListLength(interp, cmdList), argc - 1, argv + 1); return JimEvalObjList(interp, cmdList); } static void JimAliasCmdDelete(Jim_Interp *interp, void *privData) |
︙ | ︙
17406 17407 17408 17409 17410 17411 17412 | cmd = JimCreateProcedureCmd(interp, argv[2], NULL, argv[3], NULL); } else { cmd = JimCreateProcedureCmd(interp, argv[2], argv[3], argv[4], NULL); } if (cmd) { | | | | | | | 17478 17479 17480 17481 17482 17483 17484 17485 17486 17487 17488 17489 17490 17491 17492 17493 17494 17495 17496 17497 17498 17499 17500 17501 17502 17503 17504 17505 17506 17507 17508 17509 17510 17511 17512 17513 17514 17515 17516 17517 17518 17519 17520 17521 17522 17523 17524 17525 17526 | cmd = JimCreateProcedureCmd(interp, argv[2], NULL, argv[3], NULL); } else { cmd = JimCreateProcedureCmd(interp, argv[2], argv[3], argv[4], NULL); } if (cmd) { Jim_Obj *qualifiedCmdNameObj; const char *cmdname = JimQualifyName(interp, Jim_String(argv[1]), &qualifiedCmdNameObj); JimCreateCommand(interp, cmdname, cmd); JimUpdateProcNamespace(interp, cmd, cmdname); JimFreeQualifiedName(interp, qualifiedCmdNameObj); Jim_SetResult(interp, argv[1]); return JIM_OK; } return JIM_ERR; } static int Jim_LocalCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { int retcode; if (argc < 2) { Jim_WrongNumArgs(interp, 1, argv, "cmd ?args ...?"); return JIM_ERR; } interp->local++; retcode = Jim_EvalObjVector(interp, argc - 1, argv + 1); interp->local--; if (retcode == 0) { Jim_Obj *cmdNameObj = Jim_GetResult(interp); if (Jim_GetCommand(interp, cmdNameObj, JIM_ERRMSG) == NULL) { return JIM_ERR; } if (interp->framePtr->localCommands == NULL) { |
︙ | ︙
17473 17474 17475 17476 17477 17478 17479 | int retcode; Jim_Cmd *cmdPtr = Jim_GetCommand(interp, argv[1], JIM_ERRMSG); if (cmdPtr == NULL || !cmdPtr->isproc || !cmdPtr->prevCmd) { Jim_SetResultFormatted(interp, "no previous command: \"%#s\"", argv[1]); return JIM_ERR; } | | | | | 17545 17546 17547 17548 17549 17550 17551 17552 17553 17554 17555 17556 17557 17558 17559 17560 17561 17562 17563 17564 17565 17566 | int retcode; Jim_Cmd *cmdPtr = Jim_GetCommand(interp, argv[1], JIM_ERRMSG); if (cmdPtr == NULL || !cmdPtr->isproc || !cmdPtr->prevCmd) { Jim_SetResultFormatted(interp, "no previous command: \"%#s\"", argv[1]); return JIM_ERR; } cmdPtr->u.proc.upcall++; JimIncrCmdRefCount(cmdPtr); retcode = Jim_EvalObjVector(interp, argc - 1, argv + 1); cmdPtr->u.proc.upcall--; JimDecrCmdRefCount(interp, cmdPtr); return retcode; } } |
︙ | ︙
17511 17512 17513 17514 17515 17516 17517 | if (len != 2 && len != 3) { Jim_SetResultFormatted(interp, "can't interpret \"%#s\" as a lambda expression", argv[1]); return JIM_ERR; } if (len == 3) { #ifdef jim_ext_namespace | | | | 17583 17584 17585 17586 17587 17588 17589 17590 17591 17592 17593 17594 17595 17596 17597 17598 17599 17600 17601 17602 17603 17604 17605 17606 17607 17608 17609 17610 | if (len != 2 && len != 3) { Jim_SetResultFormatted(interp, "can't interpret \"%#s\" as a lambda expression", argv[1]); return JIM_ERR; } if (len == 3) { #ifdef jim_ext_namespace nsObj = JimQualifyNameObj(interp, Jim_ListGetIndex(interp, argv[1], 2)); #else Jim_SetResultString(interp, "namespaces not enabled", -1); return JIM_ERR; #endif } argListObjPtr = Jim_ListGetIndex(interp, argv[1], 0); bodyObjPtr = Jim_ListGetIndex(interp, argv[1], 1); cmd = JimCreateProcedureCmd(interp, argListObjPtr, NULL, bodyObjPtr, nsObj); if (cmd) { nargv = Jim_Alloc((argc - 2 + 1) * sizeof(*nargv)); nargv[0] = Jim_NewStringObj(interp, "apply lambdaExpr", -1); Jim_IncrRefCount(nargv[0]); memcpy(&nargv[1], argv + 2, (argc - 2) * sizeof(*nargv)); ret = JimCallProcedure(interp, cmd, argc - 2 + 1, nargv); Jim_DecrRefCount(interp, nargv[0]); Jim_Free(nargv); |
︙ | ︙
17554 17555 17556 17557 17558 17559 17560 | static int Jim_UpvarCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { int i; Jim_CallFrame *targetCallFrame; | | | | | | | | 17626 17627 17628 17629 17630 17631 17632 17633 17634 17635 17636 17637 17638 17639 17640 17641 17642 17643 17644 17645 17646 17647 17648 17649 17650 17651 17652 17653 17654 17655 17656 17657 17658 17659 17660 17661 17662 17663 17664 17665 17666 17667 17668 17669 17670 17671 17672 17673 17674 17675 17676 17677 17678 17679 17680 | static int Jim_UpvarCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { int i; Jim_CallFrame *targetCallFrame; if (argc > 3 && (argc % 2 == 0)) { targetCallFrame = Jim_GetCallFrameByLevel(interp, argv[1]); argc--; argv++; } else { targetCallFrame = Jim_GetCallFrameByLevel(interp, NULL); } if (targetCallFrame == NULL) { return JIM_ERR; } if (argc < 3) { Jim_WrongNumArgs(interp, 1, argv, "?level? otherVar localVar ?otherVar localVar ...?"); return JIM_ERR; } for (i = 1; i < argc; i += 2) { if (Jim_SetVariableLink(interp, argv[i + 1], argv[i], targetCallFrame) != JIM_OK) return JIM_ERR; } return JIM_OK; } static int Jim_GlobalCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { int i; if (argc < 2) { Jim_WrongNumArgs(interp, 1, argv, "varName ?varName ...?"); return JIM_ERR; } if (interp->framePtr->level == 0) return JIM_OK; for (i = 1; i < argc; i++) { const char *name = Jim_String(argv[i]); if (name[0] != ':' || name[1] != ':') { if (Jim_SetVariableLink(interp, argv[i], argv[i], interp->topFramePtr) != JIM_OK) return JIM_ERR; } } return JIM_OK; |
︙ | ︙
17621 17622 17623 17624 17625 17626 17627 | Jim_SetResultString(interp, "list must contain an even number of elements", -1); return NULL; } str = Jim_String(objPtr); strLen = Jim_Utf8Length(interp, objPtr); | | | | | | | | 17693 17694 17695 17696 17697 17698 17699 17700 17701 17702 17703 17704 17705 17706 17707 17708 17709 17710 17711 17712 17713 17714 17715 17716 17717 17718 17719 17720 17721 17722 17723 17724 17725 17726 17727 17728 17729 17730 17731 17732 17733 17734 | Jim_SetResultString(interp, "list must contain an even number of elements", -1); return NULL; } str = Jim_String(objPtr); strLen = Jim_Utf8Length(interp, objPtr); resultObjPtr = Jim_NewStringObj(interp, "", 0); while (strLen) { for (i = 0; i < numMaps; i += 2) { Jim_Obj *eachObjPtr; const char *k; int kl; eachObjPtr = Jim_ListGetIndex(interp, mapListObjPtr, i); k = Jim_String(eachObjPtr); kl = Jim_Utf8Length(interp, eachObjPtr); if (strLen >= kl && kl) { int rc; rc = JimStringCompareLen(str, k, kl, nocase); if (rc == 0) { if (noMatchStart) { Jim_AppendString(interp, resultObjPtr, noMatchStart, str - noMatchStart); noMatchStart = NULL; } Jim_AppendObj(interp, resultObjPtr, Jim_ListGetIndex(interp, mapListObjPtr, i + 1)); str += utf8_index(str, kl); strLen -= kl; break; } } } if (i == numMaps) { int c; if (noMatchStart == NULL) noMatchStart = str; str += utf8_tounicode(str, &c); strLen--; } } |
︙ | ︙
17713 17714 17715 17716 17717 17718 17719 | } Jim_SetResultInt(interp, len); return JIM_OK; case OPT_CAT:{ Jim_Obj *objPtr; if (argc == 3) { | | | | | | | 17785 17786 17787 17788 17789 17790 17791 17792 17793 17794 17795 17796 17797 17798 17799 17800 17801 17802 17803 17804 17805 17806 17807 17808 17809 17810 17811 17812 17813 17814 17815 17816 17817 17818 17819 17820 17821 17822 17823 17824 17825 17826 17827 17828 17829 17830 17831 17832 17833 17834 17835 17836 17837 17838 17839 17840 17841 17842 17843 17844 17845 17846 17847 17848 17849 17850 17851 | } Jim_SetResultInt(interp, len); return JIM_OK; case OPT_CAT:{ Jim_Obj *objPtr; if (argc == 3) { objPtr = argv[2]; } else { int i; objPtr = Jim_NewStringObj(interp, "", 0); for (i = 2; i < argc; i++) { Jim_AppendObj(interp, objPtr, argv[i]); } } Jim_SetResult(interp, objPtr); return JIM_OK; } case OPT_COMPARE: case OPT_EQUAL: { long opt_length = -1; int n = argc - 4; int i = 2; while (n > 0) { int subopt; if (Jim_GetEnum(interp, argv[i++], nocase_length_options, &subopt, NULL, JIM_ENUM_ABBREV) != JIM_OK) { badcompareargs: Jim_WrongNumArgs(interp, 2, argv, "?-nocase? ?-length int? string1 string2"); return JIM_ERR; } if (subopt == 0) { opt_case = 0; n--; } else { if (n < 2) { goto badcompareargs; } if (Jim_GetLong(interp, argv[i++], &opt_length) != JIM_OK) { return JIM_ERR; } n -= 2; } } if (n) { goto badcompareargs; } argv += argc - 2; if (opt_length < 0 && option != OPT_COMPARE && opt_case) { Jim_SetResultBool(interp, Jim_StringEqObj(argv[0], argv[1])); } else { if (opt_length >= 0) { n = JimStringCompareLen(Jim_String(argv[0]), Jim_String(argv[1]), opt_length, !opt_case); } else { |
︙ | ︙
17879 17880 17881 17882 17883 17884 17885 | Jim_SetResult(interp, objPtr); return JIM_OK; } case OPT_REVERSE:{ char *buf, *p; const char *str; | < | 17951 17952 17953 17954 17955 17956 17957 17958 17959 17960 17961 17962 17963 17964 | Jim_SetResult(interp, objPtr); return JIM_OK; } case OPT_REVERSE:{ char *buf, *p; const char *str; int i; if (argc != 3) { Jim_WrongNumArgs(interp, 2, argv, "string"); return JIM_ERR; } |
︙ | ︙
17923 17924 17925 17926 17927 17928 17929 | if (idx != INT_MIN && idx != INT_MAX) { idx = JimRelToAbsIndex(len, idx); } if (idx < 0 || idx >= len || str == NULL) { Jim_SetResultString(interp, "", 0); } else if (len == Jim_Length(argv[2])) { | | | 17994 17995 17996 17997 17998 17999 18000 18001 18002 18003 18004 18005 18006 18007 18008 | if (idx != INT_MIN && idx != INT_MAX) { idx = JimRelToAbsIndex(len, idx); } if (idx < 0 || idx >= len || str == NULL) { Jim_SetResultString(interp, "", 0); } else if (len == Jim_Length(argv[2])) { Jim_SetResultString(interp, str + idx, 1); } else { int c; int i = utf8_index(str, idx); Jim_SetResultString(interp, str + i, utf8_tounicode(str + i, &c)); } |
︙ | ︙
18077 18078 18079 18080 18081 18082 18083 | static int Jim_CatchCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { int exitCode = 0; int i; int sig = 0; | | | | 18148 18149 18150 18151 18152 18153 18154 18155 18156 18157 18158 18159 18160 18161 18162 18163 18164 18165 18166 18167 18168 18169 18170 18171 18172 18173 | static int Jim_CatchCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { int exitCode = 0; int i; int sig = 0; jim_wide ignore_mask = (1 << JIM_EXIT) | (1 << JIM_EVAL) | (1 << JIM_SIGNAL); static const int max_ignore_code = sizeof(ignore_mask) * 8; Jim_SetGlobalVariableStr(interp, "errorCode", Jim_NewStringObj(interp, "NONE", -1)); for (i = 1; i < argc - 1; i++) { const char *arg = Jim_String(argv[i]); jim_wide option; int ignore; if (strcmp(arg, "--") == 0) { i++; break; } if (*arg != '-') { break; } |
︙ | ︙
18117 18118 18119 18120 18121 18122 18123 | option = Jim_FindByName(arg, jimReturnCodes, jimReturnCodesSize); } if (option < 0) { goto wrongargs; } if (ignore) { | | | | | | | | | 18188 18189 18190 18191 18192 18193 18194 18195 18196 18197 18198 18199 18200 18201 18202 18203 18204 18205 18206 18207 18208 18209 18210 18211 18212 18213 18214 18215 18216 18217 18218 18219 18220 18221 18222 18223 18224 18225 18226 18227 18228 18229 18230 18231 18232 18233 18234 18235 18236 18237 18238 18239 18240 18241 | option = Jim_FindByName(arg, jimReturnCodes, jimReturnCodesSize); } if (option < 0) { goto wrongargs; } if (ignore) { ignore_mask |= ((jim_wide)1 << option); } else { ignore_mask &= (~((jim_wide)1 << option)); } } argc -= i; if (argc < 1 || argc > 3) { wrongargs: Jim_WrongNumArgs(interp, 1, argv, "?-?no?code ... --? script ?resultVarName? ?optionVarName?"); return JIM_ERR; } argv += i; if ((ignore_mask & (1 << JIM_SIGNAL)) == 0) { sig++; } interp->signal_level += sig; if (Jim_CheckSignal(interp)) { exitCode = JIM_SIGNAL; } else { exitCode = Jim_EvalObj(interp, argv[0]); interp->errorFlag = 0; } interp->signal_level -= sig; if (exitCode >= 0 && exitCode < max_ignore_code && (((unsigned jim_wide)1 << exitCode) & ignore_mask)) { return exitCode; } if (sig && exitCode == JIM_SIGNAL) { if (interp->signal_set_result) { interp->signal_set_result(interp, interp->sigmask); } else { Jim_SetResultInt(interp, interp->sigmask); } interp->sigmask = 0; |
︙ | ︙
18199 18200 18201 18202 18203 18204 18205 | } } } Jim_SetResultInt(interp, exitCode); return JIM_OK; } | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 18270 18271 18272 18273 18274 18275 18276 18277 18278 18279 18280 18281 18282 18283 | } } } Jim_SetResultInt(interp, exitCode); return JIM_OK; } static int Jim_RenameCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { if (argc != 3) { Jim_WrongNumArgs(interp, 1, argv, "oldName newName"); return JIM_ERR; |
︙ | ︙
18348 18349 18350 18351 18352 18353 18354 | static Jim_Obj *JimDictPatternMatch(Jim_Interp *interp, Jim_HashTable *ht, Jim_Obj *patternObjPtr, JimDictMatchCallbackType *callback, int type) { Jim_HashEntry *he; Jim_Obj *listObjPtr = Jim_NewListObj(interp, NULL, 0); | | | 18304 18305 18306 18307 18308 18309 18310 18311 18312 18313 18314 18315 18316 18317 18318 | static Jim_Obj *JimDictPatternMatch(Jim_Interp *interp, Jim_HashTable *ht, Jim_Obj *patternObjPtr, JimDictMatchCallbackType *callback, int type) { Jim_HashEntry *he; Jim_Obj *listObjPtr = Jim_NewListObj(interp, NULL, 0); Jim_HashTableIterator htiter; JimInitHashTableIterator(ht, &htiter); while ((he = Jim_NextHashEntry(&htiter)) != NULL) { if (patternObjPtr == NULL || JimGlobMatch(Jim_String(patternObjPtr), Jim_String((Jim_Obj *)he->key), 0)) { callback(interp, listObjPtr, he, type); } } |
︙ | ︙
18398 18399 18400 18401 18402 18403 18404 | if (SetDictFromAny(interp, objPtr) != JIM_OK) { return JIM_ERR; } ht = (Jim_HashTable *)objPtr->internalRep.ptr; | | | 18354 18355 18356 18357 18358 18359 18360 18361 18362 18363 18364 18365 18366 18367 18368 | if (SetDictFromAny(interp, objPtr) != JIM_OK) { return JIM_ERR; } ht = (Jim_HashTable *)objPtr->internalRep.ptr; printf("%d entries in table, %d buckets\n", ht->used, ht->size); for (i = 0; i < ht->size; i++) { Jim_HashEntry *he = ht->table[i]; if (he) { printf("%d: ", i); |
︙ | ︙
18522 18523 18524 18525 18526 18527 18528 | case OPT_MERGE: if (argc == 2) { return JIM_OK; } if (Jim_DictSize(interp, argv[2]) < 0) { return JIM_ERR; } | | | | | 18478 18479 18480 18481 18482 18483 18484 18485 18486 18487 18488 18489 18490 18491 18492 18493 18494 18495 18496 18497 18498 18499 18500 18501 18502 18503 18504 18505 18506 18507 18508 18509 18510 18511 18512 18513 18514 18515 18516 18517 18518 | case OPT_MERGE: if (argc == 2) { return JIM_OK; } if (Jim_DictSize(interp, argv[2]) < 0) { return JIM_ERR; } break; case OPT_UPDATE: if (argc < 6 || argc % 2) { argc = 2; } break; case OPT_CREATE: if (argc % 2) { Jim_WrongNumArgs(interp, 2, argv, "?key value ...?"); return JIM_ERR; } objPtr = Jim_NewDictObj(interp, argv + 2, argc - 2); Jim_SetResult(interp, objPtr); return JIM_OK; case OPT_INFO: if (argc != 3) { Jim_WrongNumArgs(interp, 2, argv, "dictionary"); return JIM_ERR; } return Jim_DictInfo(interp, argv[2]); } return Jim_EvalEnsemble(interp, "dict", options[option], argc - 2, argv + 2); } static int Jim_SubstCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { static const char * const options[] = { |
︙ | ︙
18618 18619 18620 18621 18622 18623 18624 | INFO_RETURNCODES, INFO_REFERENCES, INFO_ALIAS, }; #ifdef jim_ext_namespace int nons = 0; if (argc > 2 && Jim_CompareStringImmediate(interp, argv[1], "-nons")) { | | | | 18574 18575 18576 18577 18578 18579 18580 18581 18582 18583 18584 18585 18586 18587 18588 18589 18590 18591 18592 18593 18594 18595 18596 18597 18598 18599 18600 18601 18602 18603 18604 | INFO_RETURNCODES, INFO_REFERENCES, INFO_ALIAS, }; #ifdef jim_ext_namespace int nons = 0; if (argc > 2 && Jim_CompareStringImmediate(interp, argv[1], "-nons")) { argc--; argv++; nons = 1; } #endif if (argc < 2) { Jim_WrongNumArgs(interp, 1, argv, "subcommand ?args ...?"); return JIM_ERR; } if (Jim_GetEnum(interp, argv[1], commands, &cmd, "subcommand", JIM_ERRMSG | JIM_ENUM_ABBREV) != JIM_OK) { return JIM_ERR; } switch (cmd) { case INFO_EXISTS: if (argc != 3) { Jim_WrongNumArgs(interp, 2, argv, "varName"); return JIM_ERR; } Jim_SetResultBool(interp, Jim_GetVariable(interp, argv[2], 0) != NULL); |
︙ | ︙
18663 18664 18665 18666 18667 18668 18669 | return JIM_ERR; } Jim_SetResult(interp, (Jim_Obj *)cmdPtr->u.native.privData); return JIM_OK; } case INFO_CHANNELS: | | > | > | | > | > | | 18619 18620 18621 18622 18623 18624 18625 18626 18627 18628 18629 18630 18631 18632 18633 18634 18635 18636 18637 18638 18639 18640 18641 18642 18643 18644 18645 18646 18647 18648 18649 18650 18651 18652 18653 18654 18655 18656 18657 18658 18659 18660 18661 18662 18663 18664 18665 | return JIM_ERR; } Jim_SetResult(interp, (Jim_Obj *)cmdPtr->u.native.privData); return JIM_OK; } case INFO_CHANNELS: mode++; #ifndef jim_ext_aio Jim_SetResultString(interp, "aio not enabled", -1); return JIM_ERR; #endif case INFO_PROCS: mode++; case INFO_COMMANDS: if (argc != 2 && argc != 3) { Jim_WrongNumArgs(interp, 2, argv, "?pattern?"); return JIM_ERR; } #ifdef jim_ext_namespace if (!nons) { if (Jim_Length(interp->framePtr->nsObj) || (argc == 3 && JimGlobMatch("::*", Jim_String(argv[2]), 0))) { return Jim_EvalPrefix(interp, "namespace info", argc - 1, argv + 1); } } #endif Jim_SetResult(interp, JimCommandsList(interp, (argc == 3) ? argv[2] : NULL, mode)); break; case INFO_VARS: mode++; case INFO_LOCALS: mode++; case INFO_GLOBALS: if (argc != 2 && argc != 3) { Jim_WrongNumArgs(interp, 2, argv, "?pattern?"); return JIM_ERR; } #ifdef jim_ext_namespace if (!nons) { if (Jim_Length(interp->framePtr->nsObj) || (argc == 3 && JimGlobMatch("::*", Jim_String(argv[2]), 0))) { |
︙ | ︙
18801 18802 18803 18804 18805 18806 18807 | Jim_SetResult(interp, cmdPtr->u.proc.bodyObjPtr); break; case INFO_ARGS: Jim_SetResult(interp, cmdPtr->u.proc.argListObjPtr); break; case INFO_STATICS: if (cmdPtr->u.proc.staticVars) { | < | | 18761 18762 18763 18764 18765 18766 18767 18768 18769 18770 18771 18772 18773 18774 18775 18776 | Jim_SetResult(interp, cmdPtr->u.proc.bodyObjPtr); break; case INFO_ARGS: Jim_SetResult(interp, cmdPtr->u.proc.argListObjPtr); break; case INFO_STATICS: if (cmdPtr->u.proc.staticVars) { Jim_SetResult(interp, JimHashtablePatternMatch(interp, cmdPtr->u.proc.staticVars, NULL, JimVariablesMatch, JIM_VARLIST_LOCALS | JIM_VARLIST_VALUES)); } break; } break; } case INFO_VERSION: |
︙ | ︙
18825 18826 18827 18828 18829 18830 18831 | case INFO_COMPLETE: if (argc != 3 && argc != 4) { Jim_WrongNumArgs(interp, 2, argv, "script ?missing?"); return JIM_ERR; } else { | < < | | | | 18784 18785 18786 18787 18788 18789 18790 18791 18792 18793 18794 18795 18796 18797 18798 18799 18800 18801 18802 18803 18804 18805 18806 18807 18808 18809 18810 18811 18812 | case INFO_COMPLETE: if (argc != 3 && argc != 4) { Jim_WrongNumArgs(interp, 2, argv, "script ?missing?"); return JIM_ERR; } else { char missing; Jim_SetResultBool(interp, Jim_ScriptIsComplete(interp, argv[2], &missing)); if (missing != ' ' && argc == 4) { Jim_SetVariable(interp, argv[3], Jim_NewStringObj(interp, &missing, 1)); } } break; case INFO_HOSTNAME: return Jim_Eval(interp, "os.gethostname"); case INFO_NAMEOFEXECUTABLE: return Jim_Eval(interp, "{info nameofexecutable}"); case INFO_RETURNCODES: if (argc == 2) { int i; Jim_Obj *listObjPtr = Jim_NewListObj(interp, NULL, 0); |
︙ | ︙
18922 18923 18924 18925 18926 18927 18928 | return JIM_ERR; } if (option == OPT_VAR) { result = Jim_GetVariable(interp, objPtr, 0) != NULL; } else { | | | 18879 18880 18881 18882 18883 18884 18885 18886 18887 18888 18889 18890 18891 18892 18893 | return JIM_ERR; } if (option == OPT_VAR) { result = Jim_GetVariable(interp, objPtr, 0) != NULL; } else { Jim_Cmd *cmd = Jim_GetCommand(interp, objPtr, JIM_NONE); if (cmd) { switch (option) { case OPT_COMMAND: result = 1; break; |
︙ | ︙
18965 18966 18967 18968 18969 18970 18971 | str = Jim_GetString(argv[1], &len); if (len == 0) { return JIM_OK; } strLen = Jim_Utf8Length(interp, argv[1]); | | | | 18922 18923 18924 18925 18926 18927 18928 18929 18930 18931 18932 18933 18934 18935 18936 18937 18938 18939 18940 18941 18942 18943 18944 18945 18946 18947 18948 18949 | str = Jim_GetString(argv[1], &len); if (len == 0) { return JIM_OK; } strLen = Jim_Utf8Length(interp, argv[1]); if (argc == 2) { splitChars = " \n\t\r"; splitLen = 4; } else { splitChars = Jim_String(argv[2]); splitLen = Jim_Utf8Length(interp, argv[2]); } noMatchStart = str; resObjPtr = Jim_NewListObj(interp, NULL, 0); if (splitLen) { Jim_Obj *objPtr; while (strLen--) { const char *sc = splitChars; int scLen = splitLen; int sl = utf8_tounicode(str, &c); while (scLen--) { |
︙ | ︙
19007 19008 19009 19010 19011 19012 19013 | else { Jim_Obj **commonObj = NULL; #define NUM_COMMON (128 - 9) while (strLen--) { int n = utf8_tounicode(str, &c); #ifdef JIM_OPTIMIZATION if (c >= 9 && c < 128) { | | | 18964 18965 18966 18967 18968 18969 18970 18971 18972 18973 18974 18975 18976 18977 18978 | else { Jim_Obj **commonObj = NULL; #define NUM_COMMON (128 - 9) while (strLen--) { int n = utf8_tounicode(str, &c); #ifdef JIM_OPTIMIZATION if (c >= 9 && c < 128) { c -= 9; if (!commonObj) { commonObj = Jim_Alloc(sizeof(*commonObj) * NUM_COMMON); memset(commonObj, 0, sizeof(*commonObj) * NUM_COMMON); } if (!commonObj[c]) { commonObj[c] = Jim_NewStringObj(interp, str, 1); |
︙ | ︙
19041 19042 19043 19044 19045 19046 19047 | const char *joinStr; int joinStrLen; if (argc != 2 && argc != 3) { Jim_WrongNumArgs(interp, 1, argv, "list ?joinString?"); return JIM_ERR; } | | | 18998 18999 19000 19001 19002 19003 19004 19005 19006 19007 19008 19009 19010 19011 19012 | const char *joinStr; int joinStrLen; if (argc != 2 && argc != 3) { Jim_WrongNumArgs(interp, 1, argv, "list ?joinString?"); return JIM_ERR; } if (argc == 2) { joinStr = " "; joinStrLen = 1; } else { joinStr = Jim_GetString(argv[2], &joinStrLen); } |
︙ | ︙
19320 19321 19322 19323 19324 19325 19326 | return 0; else if (step > 0 && start > end) return -1; else if (step < 0 && end > start) return -1; len = end - start; if (len < 0) | | | | 19277 19278 19279 19280 19281 19282 19283 19284 19285 19286 19287 19288 19289 19290 19291 19292 19293 | return 0; else if (step > 0 && start > end) return -1; else if (step < 0 && end > start) return -1; len = end - start; if (len < 0) len = -len; if (step < 0) step = -step; len = 1 + ((len - 1) / step); if (len > INT_MAX) len = INT_MAX; return (int)((len < 0) ? -1 : len); } |
︙ | ︙
19540 19541 19542 19543 19544 19545 19546 | int arglen; const char *arg = Jim_GetString(objPtr, &arglen); *indexPtr = -1; for (entryPtr = tablePtr, i = 0; *entryPtr != NULL; entryPtr++, i++) { if (Jim_CompareStringImmediate(interp, objPtr, *entryPtr)) { | | | | 19497 19498 19499 19500 19501 19502 19503 19504 19505 19506 19507 19508 19509 19510 19511 19512 19513 19514 19515 19516 19517 19518 19519 19520 19521 19522 19523 19524 19525 19526 19527 19528 19529 | int arglen; const char *arg = Jim_GetString(objPtr, &arglen); *indexPtr = -1; for (entryPtr = tablePtr, i = 0; *entryPtr != NULL; entryPtr++, i++) { if (Jim_CompareStringImmediate(interp, objPtr, *entryPtr)) { *indexPtr = i; return JIM_OK; } if (flags & JIM_ENUM_ABBREV) { if (strncmp(arg, *entryPtr, arglen) == 0) { if (*arg == '-' && arglen == 1) { break; } if (match >= 0) { bad = "ambiguous "; goto ambiguous; } match = i; } } } if (match >= 0) { *indexPtr = match; return JIM_OK; } ambiguous: if (flags & JIM_ERRMSG) { |
︙ | ︙
19595 19596 19597 19598 19599 19600 19601 | int Jim_IsList(Jim_Obj *objPtr) { return objPtr->typePtr == &listObjType; } void Jim_SetResultFormatted(Jim_Interp *interp, const char *format, ...) { | | | 19552 19553 19554 19555 19556 19557 19558 19559 19560 19561 19562 19563 19564 19565 19566 | int Jim_IsList(Jim_Obj *objPtr) { return objPtr->typePtr == &listObjType; } void Jim_SetResultFormatted(Jim_Interp *interp, const char *format, ...) { int len = strlen(format); int extra = 0; int n = 0; const char *params[5]; char *buf; va_list args; int i; |
︙ | ︙
19660 19661 19662 19663 19664 19665 19666 | #include <stdio.h> #include <string.h> static int subcmd_null(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { | | | 19617 19618 19619 19620 19621 19622 19623 19624 19625 19626 19627 19628 19629 19630 19631 | #include <stdio.h> #include <string.h> static int subcmd_null(Jim_Interp *interp, int argc, Jim_Obj *const *argv) { return JIM_OK; } static const jim_subcmd_type dummy_subcmd = { "dummy", NULL, subcmd_null, 0, 0, JIM_MODFLAG_HIDDEN }; |
︙ | ︙
19739 19740 19741 19742 19743 19744 19745 | " command ...\"\n", NULL); Jim_AppendStrings(interp, Jim_GetResult(interp), "Use \"", cmdname, " -help ?command?\" for help", NULL); return 0; } cmd = argv[1]; | | | | | | | | | | | | | | | | | 19696 19697 19698 19699 19700 19701 19702 19703 19704 19705 19706 19707 19708 19709 19710 19711 19712 19713 19714 19715 19716 19717 19718 19719 19720 19721 19722 19723 19724 19725 19726 19727 19728 19729 19730 19731 19732 19733 19734 19735 19736 19737 19738 19739 19740 19741 19742 19743 19744 19745 19746 19747 19748 19749 19750 19751 19752 19753 19754 19755 19756 19757 19758 19759 19760 19761 19762 19763 19764 19765 19766 19767 19768 19769 19770 19771 19772 19773 19774 19775 19776 19777 19778 19779 19780 19781 19782 19783 19784 19785 19786 19787 | " command ...\"\n", NULL); Jim_AppendStrings(interp, Jim_GetResult(interp), "Use \"", cmdname, " -help ?command?\" for help", NULL); return 0; } cmd = argv[1]; if (Jim_CompareStringImmediate(interp, cmd, "-help")) { if (argc == 2) { show_cmd_usage(interp, command_table, argc, argv); return &dummy_subcmd; } help = 1; cmd = argv[2]; } if (Jim_CompareStringImmediate(interp, cmd, "-commands")) { Jim_SetResult(interp, Jim_NewEmptyStringObj(interp)); add_commands(interp, command_table, " "); return &dummy_subcmd; } cmdstr = Jim_GetString(cmd, &cmdlen); for (ct = command_table; ct->cmd; ct++) { if (Jim_CompareStringImmediate(interp, cmd, ct->cmd)) { break; } if (strncmp(cmdstr, ct->cmd, cmdlen) == 0) { if (partial) { if (help) { show_cmd_usage(interp, command_table, argc, argv); return &dummy_subcmd; } bad_subcmd(interp, command_table, "ambiguous", argv[0], argv[1 + help]); return 0; } partial = ct; } continue; } if (partial && !ct->cmd) { ct = partial; } if (!ct->cmd) { if (help) { show_cmd_usage(interp, command_table, argc, argv); return &dummy_subcmd; } bad_subcmd(interp, command_table, "unknown", argv[0], argv[1 + help]); return 0; } if (help) { Jim_SetResultString(interp, "Usage: ", -1); add_cmd_usage(interp, ct, argv[0]); return &dummy_subcmd; } if (argc - 2 < ct->minargs || (ct->maxargs >= 0 && argc - 2 > ct->maxargs)) { Jim_SetResultString(interp, "wrong # args: should be \"", -1); add_cmd_usage(interp, ct, argv[0]); Jim_AppendStrings(interp, Jim_GetResult(interp), "\"", NULL); return 0; } return ct; } int Jim_CallSubCmd(Jim_Interp *interp, const jim_subcmd_type * ct, int argc, Jim_Obj *const *argv) { int ret = JIM_ERR; |
︙ | ︙
19871 19872 19873 19874 19875 19876 19877 | } else if (uc <= 0xffff) { *p++ = 0xe0 | ((uc & 0xf000) >> 12); *p++ = 0x80 | ((uc & 0xfc0) >> 6); *p = 0x80 | (uc & 0x3f); return 3; } | | | 19828 19829 19830 19831 19832 19833 19834 19835 19836 19837 19838 19839 19840 19841 19842 | } else if (uc <= 0xffff) { *p++ = 0xe0 | ((uc & 0xf000) >> 12); *p++ = 0x80 | ((uc & 0xfc0) >> 6); *p = 0x80 | (uc & 0x3f); return 3; } else { *p++ = 0xf0 | ((uc & 0x1c0000) >> 18); *p++ = 0x80 | ((uc & 0x3f000) >> 12); *p++ = 0x80 | ((uc & 0xfc0) >> 6); *p = 0x80 | (uc & 0x3f); return 4; } |
︙ | ︙
20062 20063 20064 20065 20066 20067 20068 | useShort = 0; if (ch == 'h') { useShort = 1; format += step; step = utf8_tounicode(format, &ch); } else if (ch == 'l') { | | | 20019 20020 20021 20022 20023 20024 20025 20026 20027 20028 20029 20030 20031 20032 20033 | useShort = 0; if (ch == 'h') { useShort = 1; format += step; step = utf8_tounicode(format, &ch); } else if (ch == 'l') { format += step; step = utf8_tounicode(format, &ch); if (ch == 'l') { format += step; step = utf8_tounicode(format, &ch); } } |
︙ | ︙
20089 20090 20091 20092 20093 20094 20095 | case '\0': msg = "format string ended in middle of field specifier"; goto errorMsg; case 's': { formatted_buf = Jim_GetString(objv[objIndex], &formatted_bytes); formatted_chars = Jim_Utf8Length(interp, objv[objIndex]); if (gotPrecision && (precision < formatted_chars)) { | | | | | 20046 20047 20048 20049 20050 20051 20052 20053 20054 20055 20056 20057 20058 20059 20060 20061 20062 20063 20064 20065 20066 20067 20068 20069 20070 20071 20072 20073 20074 20075 20076 20077 20078 20079 20080 20081 20082 20083 20084 20085 20086 20087 20088 20089 20090 | case '\0': msg = "format string ended in middle of field specifier"; goto errorMsg; case 's': { formatted_buf = Jim_GetString(objv[objIndex], &formatted_bytes); formatted_chars = Jim_Utf8Length(interp, objv[objIndex]); if (gotPrecision && (precision < formatted_chars)) { formatted_chars = precision; formatted_bytes = utf8_index(formatted_buf, precision); } break; } case 'c': { jim_wide code; if (Jim_GetWide(interp, objv[objIndex], &code) != JIM_OK) { goto error; } formatted_bytes = utf8_getchars(spec, code); formatted_buf = spec; formatted_chars = 1; break; } case 'b': { unsigned jim_wide w; int length; int i; int j; if (Jim_GetWide(interp, objv[objIndex], (jim_wide *)&w) != JIM_OK) { goto error; } length = sizeof(w) * 8; if (num_buffer_size < length + 1) { num_buffer_size = length + 1; num_buffer = Jim_Realloc(num_buffer, num_buffer_size); } j = 0; for (i = length; i > 0; ) { |
︙ | ︙
20147 20148 20149 20150 20151 20152 20153 | case 'e': case 'E': case 'f': case 'g': case 'G': doubleType = 1; | | | | | 20104 20105 20106 20107 20108 20109 20110 20111 20112 20113 20114 20115 20116 20117 20118 20119 20120 20121 20122 20123 20124 20125 20126 20127 20128 20129 20130 20131 20132 20133 20134 20135 20136 | case 'e': case 'E': case 'f': case 'g': case 'G': doubleType = 1; case 'd': case 'u': case 'o': case 'x': case 'X': { jim_wide w; double d; int length; if (width) { p += sprintf(p, "%ld", width); } if (gotPrecision) { p += sprintf(p, ".%ld", precision); } if (doubleType) { if (Jim_GetDouble(interp, objv[objIndex], &d) != JIM_OK) { goto error; } length = MAX_FLOAT_WIDTH; } else { |
︙ | ︙
20196 20197 20198 20199 20200 20201 20202 | } #endif } *p++ = (char) ch; *p = '\0'; | | | | | 20153 20154 20155 20156 20157 20158 20159 20160 20161 20162 20163 20164 20165 20166 20167 20168 20169 20170 20171 20172 20173 20174 20175 20176 20177 20178 20179 20180 20181 20182 20183 20184 20185 20186 20187 20188 20189 20190 20191 20192 20193 | } #endif } *p++ = (char) ch; *p = '\0'; if (width > length) { length = width; } if (gotPrecision) { length += precision; } if (num_buffer_size < length + 1) { num_buffer_size = length + 1; num_buffer = Jim_Realloc(num_buffer, num_buffer_size); } if (doubleType) { snprintf(num_buffer, length + 1, spec, d); } else { formatted_bytes = snprintf(num_buffer, length + 1, spec, w); } formatted_chars = formatted_bytes = strlen(num_buffer); formatted_buf = num_buffer; break; } default: { spec[0] = ch; spec[1] = '\0'; Jim_SetResultFormatted(interp, "bad field specifier \"%s\"", spec); goto error; } } |
︙ | ︙
20274 20275 20276 20277 20278 20279 20280 | #define REG_MAX_PAREN 100 | | | | | | | | | | | | | | | | > | | | | | | | | | | | | | 20231 20232 20233 20234 20235 20236 20237 20238 20239 20240 20241 20242 20243 20244 20245 20246 20247 20248 20249 20250 20251 20252 20253 20254 20255 20256 20257 20258 20259 20260 20261 20262 20263 20264 20265 20266 20267 20268 20269 20270 20271 20272 20273 20274 20275 20276 20277 20278 20279 20280 20281 20282 20283 20284 20285 20286 20287 20288 20289 20290 20291 20292 20293 20294 20295 | #define REG_MAX_PAREN 100 #define END 0 #define BOL 1 #define EOL 2 #define ANY 3 #define ANYOF 4 #define ANYBUT 5 #define BRANCH 6 #define BACK 7 #define EXACTLY 8 #define NOTHING 9 #define REP 10 #define REPMIN 11 #define REPX 12 #define REPXMIN 13 #define BOLX 14 #define EOLX 15 #define WORDA 16 #define WORDZ 17 #define OPENNC 1000 #define OPEN 1001 #define CLOSENC 2000 #define CLOSE 2001 #define CLOSE_END (CLOSE+REG_MAX_PAREN) #define REG_MAGIC 0xFADED00D #define OP(preg, p) (preg->program[p]) #define NEXT(preg, p) (preg->program[p + 1]) #define OPERAND(p) ((p) + 2) #define FAIL(R,M) { (R)->err = (M); return (M); } #define ISMULT(c) ((c) == '*' || (c) == '+' || (c) == '?' || (c) == '{') #define META "^$.[()|?{+*" #define HASWIDTH 1 #define SIMPLE 2 #define SPSTART 4 #define WORST 0 #define MAX_REP_COUNT 1000000 static int reg(regex_t *preg, int paren, int *flagp ); static int regpiece(regex_t *preg, int *flagp ); static int regbranch(regex_t *preg, int *flagp ); static int regatom(regex_t *preg, int *flagp ); static int regnode(regex_t *preg, int op ); static int regnext(regex_t *preg, int p ); static void regc(regex_t *preg, int b ); static int reginsert(regex_t *preg, int op, int size, int opnd ); |
︙ | ︙
20371 20372 20373 20374 20375 20376 20377 | fprintf(stderr, "Compiling: '%s'\n", exp); #endif memset(preg, 0, sizeof(*preg)); if (exp == NULL) FAIL(preg, REG_ERR_NULL_ARGUMENT); | | | | | | | | | | | 20329 20330 20331 20332 20333 20334 20335 20336 20337 20338 20339 20340 20341 20342 20343 20344 20345 20346 20347 20348 20349 20350 20351 20352 20353 20354 20355 20356 20357 20358 20359 20360 20361 20362 20363 20364 20365 20366 20367 20368 20369 20370 20371 | fprintf(stderr, "Compiling: '%s'\n", exp); #endif memset(preg, 0, sizeof(*preg)); if (exp == NULL) FAIL(preg, REG_ERR_NULL_ARGUMENT); preg->cflags = cflags; preg->regparse = exp; preg->proglen = (strlen(exp) + 1) * 5; preg->program = malloc(preg->proglen * sizeof(int)); if (preg->program == NULL) FAIL(preg, REG_ERR_NOMEM); regc(preg, REG_MAGIC); if (reg(preg, 0, &flags) == 0) { return preg->err; } if (preg->re_nsub >= REG_MAX_PAREN) FAIL(preg,REG_ERR_TOO_BIG); preg->regstart = 0; preg->reganch = 0; preg->regmust = 0; preg->regmlen = 0; scan = 1; if (OP(preg, regnext(preg, scan)) == END) { scan = OPERAND(scan); if (OP(preg, scan) == EXACTLY) { preg->regstart = preg->program[OPERAND(scan)]; } else if (OP(preg, scan) == BOL) preg->reganch++; if (flags&SPSTART) { |
︙ | ︙
20430 20431 20432 20433 20434 20435 20436 | #ifdef DEBUG regdump(preg); #endif return 0; } | | | | | | | | | | | | 20388 20389 20390 20391 20392 20393 20394 20395 20396 20397 20398 20399 20400 20401 20402 20403 20404 20405 20406 20407 20408 20409 20410 20411 20412 20413 20414 20415 20416 20417 20418 20419 20420 20421 20422 20423 20424 20425 20426 20427 20428 20429 20430 20431 20432 20433 20434 20435 20436 20437 20438 20439 20440 20441 20442 20443 20444 20445 20446 20447 20448 20449 20450 20451 20452 20453 20454 20455 20456 | #ifdef DEBUG regdump(preg); #endif return 0; } static int reg(regex_t *preg, int paren, int *flagp ) { int ret; int br; int ender; int parno = 0; int flags; *flagp = HASWIDTH; if (paren) { if (preg->regparse[0] == '?' && preg->regparse[1] == ':') { preg->regparse += 2; parno = -1; } else { parno = ++preg->re_nsub; } ret = regnode(preg, OPEN+parno); } else ret = 0; br = regbranch(preg, &flags); if (br == 0) return 0; if (ret != 0) regtail(preg, ret, br); else ret = br; if (!(flags&HASWIDTH)) *flagp &= ~HASWIDTH; *flagp |= flags&SPSTART; while (*preg->regparse == '|') { preg->regparse++; br = regbranch(preg, &flags); if (br == 0) return 0; regtail(preg, ret, br); if (!(flags&HASWIDTH)) *flagp &= ~HASWIDTH; *flagp |= flags&SPSTART; } ender = regnode(preg, (paren) ? CLOSE+parno : END); regtail(preg, ret, ender); for (br = ret; br != 0; br = regnext(preg, br)) regoptail(preg, br, ender); if (paren && *preg->regparse++ != ')') { preg->err = REG_ERR_UNMATCHED_PAREN; return 0; } else if (!paren && *preg->regparse != '\0') { if (*preg->regparse == ')') { preg->err = REG_ERR_UNMATCHED_PAREN; return 0; |
︙ | ︙
20508 20509 20510 20511 20512 20513 20514 | static int regbranch(regex_t *preg, int *flagp ) { int ret; int chain; int latest; int flags; | | | | 20466 20467 20468 20469 20470 20471 20472 20473 20474 20475 20476 20477 20478 20479 20480 20481 20482 20483 20484 20485 20486 20487 20488 20489 20490 20491 20492 20493 20494 20495 20496 20497 20498 | static int regbranch(regex_t *preg, int *flagp ) { int ret; int chain; int latest; int flags; *flagp = WORST; ret = regnode(preg, BRANCH); chain = 0; while (*preg->regparse != '\0' && *preg->regparse != ')' && *preg->regparse != '|') { latest = regpiece(preg, &flags); if (latest == 0) return 0; *flagp |= flags&HASWIDTH; if (chain == 0) { *flagp |= flags&SPSTART; } else { regtail(preg, chain, latest); } chain = latest; } if (chain == 0) (void) regnode(preg, NOTHING); return(ret); } static int regpiece(regex_t *preg, int *flagp) { |
︙ | ︙
20556 20557 20558 20559 20560 20561 20562 | } if (!(flags&HASWIDTH) && op != '?') { preg->err = REG_ERR_OPERAND_COULD_BE_EMPTY; return 0; } | | | 20514 20515 20516 20517 20518 20519 20520 20521 20522 20523 20524 20525 20526 20527 20528 | } if (!(flags&HASWIDTH) && op != '?') { preg->err = REG_ERR_OPERAND_COULD_BE_EMPTY; return 0; } if (op == '{') { char *end; min = strtoul(preg->regparse + 1, &end, 10); if (end == preg->regparse + 1) { preg->err = REG_ERR_BAD_COUNT; return 0; |
︙ | ︙
20628 20629 20630 20631 20632 20633 20634 | } static void reg_addrange(regex_t *preg, int lower, int upper) { if (lower > upper) { reg_addrange(preg, upper, lower); } | | | 20586 20587 20588 20589 20590 20591 20592 20593 20594 20595 20596 20597 20598 20599 20600 | } static void reg_addrange(regex_t *preg, int lower, int upper) { if (lower > upper) { reg_addrange(preg, upper, lower); } regc(preg, upper - lower + 1); regc(preg, lower); } static void reg_addrange_str(regex_t *preg, const char *str) { while (*str) { |
︙ | ︙
20696 20697 20698 20699 20700 20701 20702 | case 'f': *ch = '\f'; break; case 'n': *ch = '\n'; break; case 'r': *ch = '\r'; break; case 't': *ch = '\t'; break; case 'v': *ch = '\v'; break; case 'u': if (*s == '{') { | | | | 20654 20655 20656 20657 20658 20659 20660 20661 20662 20663 20664 20665 20666 20667 20668 20669 20670 20671 20672 20673 20674 | case 'f': *ch = '\f'; break; case 'n': *ch = '\n'; break; case 'r': *ch = '\r'; break; case 't': *ch = '\t'; break; case 'v': *ch = '\v'; break; case 'u': if (*s == '{') { n = parse_hex(s + 1, 6, ch); if (n > 0 && s[n + 1] == '}' && *ch >= 0 && *ch <= 0x1fffff) { s += n + 2; } else { *ch = 'u'; } } else if ((n = parse_hex(s, 4, ch)) > 0) { s += n; } break; |
︙ | ︙
20737 20738 20739 20740 20741 20742 20743 | int ret; int flags; int nocase = (preg->cflags & REG_ICASE); int ch; int n = reg_utf8_tounicode_case(preg->regparse, &ch, nocase); | | | | | | | | > > > > > > > > > > | > > | < | < | | | > > > > > | > | | | | > > | < | < < > | > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > | | 20695 20696 20697 20698 20699 20700 20701 20702 20703 20704 20705 20706 20707 20708 20709 20710 20711 20712 20713 20714 20715 20716 20717 20718 20719 20720 20721 20722 20723 20724 20725 20726 20727 20728 20729 20730 20731 20732 20733 20734 20735 20736 20737 20738 20739 20740 20741 20742 20743 20744 20745 20746 20747 20748 20749 20750 20751 20752 20753 20754 20755 20756 20757 20758 20759 20760 20761 20762 20763 20764 20765 20766 20767 20768 20769 20770 20771 20772 20773 20774 20775 20776 20777 20778 20779 20780 20781 20782 20783 20784 20785 20786 20787 20788 20789 20790 20791 20792 20793 20794 20795 20796 20797 20798 20799 20800 20801 20802 20803 20804 20805 20806 20807 20808 20809 20810 20811 20812 20813 20814 20815 20816 20817 20818 20819 20820 20821 20822 20823 20824 20825 20826 20827 20828 20829 20830 20831 20832 20833 20834 20835 20836 20837 | int ret; int flags; int nocase = (preg->cflags & REG_ICASE); int ch; int n = reg_utf8_tounicode_case(preg->regparse, &ch, nocase); *flagp = WORST; preg->regparse += n; switch (ch) { case '^': ret = regnode(preg, BOL); break; case '$': ret = regnode(preg, EOL); break; case '.': ret = regnode(preg, ANY); *flagp |= HASWIDTH|SIMPLE; break; case '[': { const char *pattern = preg->regparse; if (*pattern == '^') { ret = regnode(preg, ANYBUT); pattern++; } else ret = regnode(preg, ANYOF); if (*pattern == ']' || *pattern == '-') { reg_addrange(preg, *pattern, *pattern); pattern++; } while (*pattern && *pattern != ']') { int start; int end; pattern += reg_utf8_tounicode_case(pattern, &start, nocase); if (start == '\\') { pattern += reg_decode_escape(pattern, &start); if (start == 0) { preg->err = REG_ERR_NULL_CHAR; return 0; } } if (pattern[0] == '-' && pattern[1] && pattern[1] != ']') { pattern += utf8_tounicode(pattern, &end); pattern += reg_utf8_tounicode_case(pattern, &end, nocase); if (end == '\\') { pattern += reg_decode_escape(pattern, &end); if (end == 0) { preg->err = REG_ERR_NULL_CHAR; return 0; } } reg_addrange(preg, start, end); continue; } if (start == '[' && pattern[0] == ':') { static const char *character_class[] = { ":alpha:", ":alnum:", ":space:", ":blank:", ":upper:", ":lower:", ":digit:", ":xdigit:", ":cntrl:", ":graph:", ":print:", ":punct:", }; enum { CC_ALPHA, CC_ALNUM, CC_SPACE, CC_BLANK, CC_UPPER, CC_LOWER, CC_DIGIT, CC_XDIGIT, CC_CNTRL, CC_GRAPH, CC_PRINT, CC_PUNCT, CC_NUM }; int i; for (i = 0; i < CC_NUM; i++) { n = strlen(character_class[i]); if (strncmp(pattern, character_class[i], n) == 0) { pattern += n + 1; break; } } if (i != CC_NUM) { switch (i) { case CC_ALNUM: reg_addrange(preg, '0', '9'); case CC_ALPHA: if ((preg->cflags & REG_ICASE) == 0) { reg_addrange(preg, 'a', 'z'); } reg_addrange(preg, 'A', 'Z'); break; case CC_SPACE: reg_addrange_str(preg, " \t\r\n\f\v"); break; case CC_BLANK: reg_addrange_str(preg, " \t"); break; case CC_UPPER: reg_addrange(preg, 'A', 'Z'); break; case CC_LOWER: reg_addrange(preg, 'a', 'z'); break; case CC_XDIGIT: reg_addrange(preg, 'a', 'f'); reg_addrange(preg, 'A', 'F'); case CC_DIGIT: reg_addrange(preg, '0', '9'); break; case CC_CNTRL: reg_addrange(preg, 0, 31); reg_addrange(preg, 127, 127); break; case CC_PRINT: reg_addrange(preg, ' ', '~'); break; case 
CC_GRAPH: reg_addrange(preg, '!', '~'); break; case CC_PUNCT: reg_addrange(preg, '!', '/'); reg_addrange(preg, ':', '@'); reg_addrange(preg, '[', '`'); reg_addrange(preg, '{', '~'); break; } continue; } } reg_addrange(preg, start, start); } regc(preg, '\0'); if (*pattern) { pattern++; } |
︙ | ︙
20842 20843 20844 20845 20846 20847 20848 | return 0; *flagp |= flags&(HASWIDTH|SPSTART); break; case '\0': case '|': case ')': preg->err = REG_ERR_INTERNAL; | | | > > > > > > > > | > | > | | | | | | | | | | | | | 20846 20847 20848 20849 20850 20851 20852 20853 20854 20855 20856 20857 20858 20859 20860 20861 20862 20863 20864 20865 20866 20867 20868 20869 20870 20871 20872 20873 20874 20875 20876 20877 20878 20879 20880 20881 20882 20883 20884 20885 20886 20887 20888 20889 20890 20891 20892 20893 20894 20895 20896 20897 20898 20899 20900 20901 20902 20903 20904 20905 20906 20907 20908 20909 20910 20911 20912 20913 20914 20915 20916 20917 20918 20919 20920 20921 20922 20923 20924 20925 20926 20927 20928 20929 20930 20931 20932 20933 20934 20935 20936 20937 20938 20939 20940 20941 20942 20943 20944 20945 20946 20947 20948 20949 20950 20951 20952 20953 20954 20955 20956 20957 20958 20959 20960 | return 0; *flagp |= flags&(HASWIDTH|SPSTART); break; case '\0': case '|': case ')': preg->err = REG_ERR_INTERNAL; return 0; case '?': case '+': case '*': case '{': preg->err = REG_ERR_COUNT_FOLLOWS_NOTHING; return 0; case '\\': ch = *preg->regparse++; switch (ch) { case '\0': preg->err = REG_ERR_TRAILING_BACKSLASH; return 0; case 'A': ret = regnode(preg, BOLX); break; case 'Z': ret = regnode(preg, EOLX); break; case '<': case 'm': ret = regnode(preg, WORDA); break; case '>': case 'M': ret = regnode(preg, WORDZ); break; case 'd': case 'D': ret = regnode(preg, ch == 'd' ? ANYOF : ANYBUT); reg_addrange(preg, '0', '9'); regc(preg, '\0'); *flagp |= HASWIDTH|SIMPLE; break; case 'w': case 'W': ret = regnode(preg, ch == 'w' ? ANYOF : ANYBUT); if ((preg->cflags & REG_ICASE) == 0) { reg_addrange(preg, 'a', 'z'); } reg_addrange(preg, 'A', 'Z'); reg_addrange(preg, '0', '9'); reg_addrange(preg, '_', '_'); regc(preg, '\0'); *flagp |= HASWIDTH|SIMPLE; break; case 's': case 'S': ret = regnode(preg, ch == 's' ? ANYOF : ANYBUT); reg_addrange_str(preg," \t\r\n\f\v"); regc(preg, '\0'); *flagp |= HASWIDTH|SIMPLE; break; default: preg->regparse--; goto de_fault; } break; de_fault: default: { int added = 0; preg->regparse -= n; ret = regnode(preg, EXACTLY); while (*preg->regparse && strchr(META, *preg->regparse) == NULL) { n = reg_utf8_tounicode_case(preg->regparse, &ch, (preg->cflags & REG_ICASE)); if (ch == '\\' && preg->regparse[n]) { if (strchr("<>mMwWdDsSAZ", preg->regparse[n])) { break; } n += reg_decode_escape(preg->regparse + n, &ch); if (ch == 0) { preg->err = REG_ERR_NULL_CHAR; return 0; } } if (ISMULT(preg->regparse[n])) { if (added) { break; } regc(preg, ch); added++; preg->regparse += n; break; } regc(preg, ch); added++; preg->regparse += n; } regc(preg, '\0'); *flagp |= HASWIDTH; |
︙ | ︙
20963 20964 20965 20966 20967 20968 20969 | } static int regnode(regex_t *preg, int op) { reg_grow(preg, 2); | | | | | | | | | | | | | | | | | | | | | | | 20977 20978 20979 20980 20981 20982 20983 20984 20985 20986 20987 20988 20989 20990 20991 20992 20993 20994 20995 20996 20997 20998 20999 21000 21001 21002 21003 21004 21005 21006 21007 21008 21009 21010 21011 21012 21013 21014 21015 21016 21017 21018 21019 21020 21021 21022 21023 21024 21025 21026 21027 21028 21029 21030 21031 21032 21033 21034 21035 21036 21037 21038 21039 21040 21041 21042 21043 21044 21045 21046 21047 21048 21049 21050 21051 21052 21053 21054 21055 21056 21057 21058 21059 21060 21061 21062 21063 21064 21065 21066 21067 21068 21069 21070 21071 21072 21073 21074 21075 21076 21077 21078 21079 21080 21081 21082 21083 21084 21085 21086 21087 21088 21089 21090 21091 21092 21093 21094 21095 21096 21097 21098 21099 21100 21101 21102 21103 21104 21105 21106 21107 21108 21109 21110 21111 21112 21113 21114 21115 21116 21117 21118 21119 21120 21121 21122 21123 21124 21125 21126 21127 21128 21129 21130 21131 21132 21133 21134 21135 21136 21137 21138 21139 21140 21141 21142 21143 21144 21145 21146 21147 21148 21149 21150 21151 21152 21153 21154 21155 21156 21157 21158 21159 21160 21161 | } static int regnode(regex_t *preg, int op) { reg_grow(preg, 2); preg->program[preg->p++] = op; preg->program[preg->p++] = 0; return preg->p - 2; } static void regc(regex_t *preg, int b ) { reg_grow(preg, 1); preg->program[preg->p++] = b; } static int reginsert(regex_t *preg, int op, int size, int opnd ) { reg_grow(preg, size); memmove(preg->program + opnd + size, preg->program + opnd, sizeof(int) * (preg->p - opnd)); memset(preg->program + opnd, 0, sizeof(int) * size); preg->program[opnd] = op; preg->p += size; return opnd + size; } static void regtail(regex_t *preg, int p, int val) { int scan; int temp; int offset; scan = p; for (;;) { temp = regnext(preg, scan); if (temp == 0) break; scan = temp; } if (OP(preg, scan) == BACK) offset = scan - val; else offset = val - scan; preg->program[scan + 1] = offset; } static void regoptail(regex_t *preg, int p, int val ) { if (p != 0 && OP(preg, p) == BRANCH) { regtail(preg, OPERAND(p), val); } } static int regtry(regex_t *preg, const char *string ); static int regmatch(regex_t *preg, int prog); static int regrepeat(regex_t *preg, int p, int max); int regexec(regex_t *preg, const char *string, size_t nmatch, regmatch_t pmatch[], int eflags) { const char *s; int scan; if (preg == NULL || preg->program == NULL || string == NULL) { return REG_ERR_NULL_ARGUMENT; } if (*preg->program != REG_MAGIC) { return REG_ERR_CORRUPTED; } #ifdef DEBUG fprintf(stderr, "regexec: %s\n", string); regdump(preg); #endif preg->eflags = eflags; preg->pmatch = pmatch; preg->nmatch = nmatch; preg->start = string; for (scan = OPERAND(1); scan != 0; scan += regopsize(preg, scan)) { int op = OP(preg, scan); if (op == END) break; if (op == REPX || op == REPXMIN) preg->program[scan + 4] = 0; } if (preg->regmust != 0) { s = string; while ((s = str_find(s, preg->program[preg->regmust], preg->cflags & REG_ICASE)) != NULL) { if (prefix_cmp(preg->program + preg->regmust, preg->regmlen, s, preg->cflags & REG_ICASE) >= 0) { break; } s++; } if (s == NULL) return REG_NOMATCH; } preg->regbol = string; if (preg->reganch) { if (eflags & REG_NOTBOL) { goto nextline; } while (1) { if (regtry(preg, string)) { return REG_NOERROR; } if (*string) { nextline: if (preg->cflags & REG_NEWLINE) { string = strchr(string, '\n'); if (string) { preg->regbol = 
++string; continue; } } } return REG_NOMATCH; } } s = string; if (preg->regstart != '\0') { while ((s = str_find(s, preg->regstart, preg->cflags & REG_ICASE)) != NULL) { if (regtry(preg, s)) return REG_NOERROR; s++; } } else while (1) { if (regtry(preg, s)) return REG_NOERROR; if (*s == '\0') { break; } else { int c; s += utf8_tounicode(s, &c); } } return REG_NOMATCH; } static int regtry( regex_t *preg, const char *string ) { int i; preg->reginput = string; for (i = 0; i < preg->nmatch; i++) { |
︙ | ︙
21174 21175 21176 21177 21178 21179 21180 | } return -1; } static int reg_range_find(const int *range, int c) { while (*range) { | | | | 21188 21189 21190 21191 21192 21193 21194 21195 21196 21197 21198 21199 21200 21201 21202 21203 21204 21205 21206 21207 21208 21209 21210 21211 21212 21213 21214 | } return -1; } static int reg_range_find(const int *range, int c) { while (*range) { if (c >= range[1] && c <= (range[0] + range[1] - 1)) { return 1; } range += 2; } return 0; } static const char *str_find(const char *string, int c, int nocase) { if (nocase) { c = utf8_upper(c); } while (*string) { int ch; int n = reg_utf8_tounicode_case(string, &ch, nocase); if (c == ch) { return string; |
︙ | ︙
21230 21231 21232 21233 21234 21235 21236 | } save = preg->reginput; no = regrepeat(preg, scan + 5, max); if (no < min) { return 0; } if (matchmin) { | | | | | | | | | | | | | | | > > > > > | > > > > > > > | | | | | | | 21244 21245 21246 21247 21248 21249 21250 21251 21252 21253 21254 21255 21256 21257 21258 21259 21260 21261 21262 21263 21264 21265 21266 21267 21268 21269 21270 21271 21272 21273 21274 21275 21276 21277 21278 21279 21280 21281 21282 21283 21284 21285 21286 21287 21288 21289 21290 21291 21292 21293 21294 21295 21296 21297 21298 21299 21300 21301 21302 21303 21304 21305 21306 21307 21308 21309 21310 21311 21312 21313 21314 21315 21316 21317 21318 21319 21320 21321 21322 21323 21324 21325 21326 21327 21328 21329 21330 21331 21332 21333 21334 21335 21336 21337 21338 21339 21340 21341 21342 21343 21344 21345 21346 21347 21348 21349 21350 21351 21352 21353 21354 21355 21356 21357 21358 21359 21360 21361 21362 21363 21364 21365 21366 21367 21368 21369 21370 21371 21372 21373 21374 21375 21376 21377 21378 21379 21380 21381 21382 21383 21384 21385 21386 21387 21388 21389 21390 21391 21392 21393 21394 21395 21396 21397 21398 21399 21400 21401 21402 21403 21404 21405 21406 21407 | } save = preg->reginput; no = regrepeat(preg, scan + 5, max); if (no < min) { return 0; } if (matchmin) { max = no; no = min; } while (1) { if (matchmin) { if (no > max) { break; } } else { if (no < min) { break; } } preg->reginput = save + utf8_index(save, no); reg_utf8_tounicode_case(preg->reginput, &c, (preg->cflags & REG_ICASE)); if (reg_iseol(preg, nextch) || c == nextch) { if (regmatch(preg, next)) { return(1); } } if (matchmin) { no++; } else { no--; } } return(0); } static int regmatchrepeat(regex_t *preg, int scan, int matchmin) { int *scanpt = preg->program + scan; int max = scanpt[2]; int min = scanpt[3]; if (scanpt[4] < min) { scanpt[4]++; if (regmatch(preg, scan + 5)) { return 1; } scanpt[4]--; return 0; } if (scanpt[4] > max) { return 0; } if (matchmin) { if (regmatch(preg, regnext(preg, scan))) { return 1; } scanpt[4]++; if (regmatch(preg, scan + 5)) { return 1; } scanpt[4]--; return 0; } if (scanpt[4] < max) { scanpt[4]++; if (regmatch(preg, scan + 5)) { return 1; } scanpt[4]--; } return regmatch(preg, regnext(preg, scan)); } static int regmatch(regex_t *preg, int prog) { int scan; int next; const char *save; scan = prog; #ifdef DEBUG if (scan != 0 && regnarrate) fprintf(stderr, "%s(\n", regprop(scan)); #endif while (scan != 0) { int n; int c; #ifdef DEBUG if (regnarrate) { fprintf(stderr, "%3d: %s...\n", scan, regprop(OP(preg, scan))); } #endif next = regnext(preg, scan); n = reg_utf8_tounicode_case(preg->reginput, &c, (preg->cflags & REG_ICASE)); switch (OP(preg, scan)) { case BOLX: if ((preg->eflags & REG_NOTBOL)) { return(0); } case BOL: if (preg->reginput != preg->regbol) { return(0); } break; case EOLX: if (c != 0) { return 0; } break; case EOL: if (!reg_iseol(preg, c)) { return(0); } break; case WORDA: if ((!isalnum(UCHAR(c))) && c != '_') return(0); if (preg->reginput > preg->regbol && (isalnum(UCHAR(preg->reginput[-1])) || preg->reginput[-1] == '_')) return(0); break; case WORDZ: if (preg->reginput > preg->regbol) { if (reg_iseol(preg, c) || !isalnum(UCHAR(c)) || c != '_') { c = preg->reginput[-1]; if (isalnum(UCHAR(c)) || c == '_') { break; } } } return(0); case ANY: if (reg_iseol(preg, c)) return 0; preg->reginput += n; break; |
︙ | ︙
21407 21408 21409 21410 21411 21412 21413 | preg->reginput += n; break; case NOTHING: break; case BACK: break; case BRANCH: | | | | | | 21433 21434 21435 21436 21437 21438 21439 21440 21441 21442 21443 21444 21445 21446 21447 21448 21449 21450 21451 21452 21453 21454 21455 21456 21457 21458 21459 21460 21461 21462 21463 21464 21465 21466 21467 21468 21469 21470 21471 | preg->reginput += n; break; case NOTHING: break; case BACK: break; case BRANCH: if (OP(preg, next) != BRANCH) next = OPERAND(scan); else { do { save = preg->reginput; if (regmatch(preg, OPERAND(scan))) { return(1); } preg->reginput = save; scan = regnext(preg, scan); } while (scan != 0 && OP(preg, scan) == BRANCH); return(0); } break; case REP: case REPMIN: return regmatchsimplerepeat(preg, scan, OP(preg, scan) == REPMIN); case REPX: case REPXMIN: return regmatchrepeat(preg, scan, OP(preg, scan) == REPXMIN); case END: return 1; case OPENNC: case CLOSENC: return regmatch(preg, next); default: if (OP(preg, scan) >= OPEN+1 && OP(preg, scan) < CLOSE_END) { |
︙ | ︙
21478 21479 21480 21481 21482 21483 21484 | int ch; int n; scan = preg->reginput; opnd = OPERAND(p); switch (OP(preg, p)) { case ANY: | | | 21504 21505 21506 21507 21508 21509 21510 21511 21512 21513 21514 21515 21516 21517 21518 | int ch; int n; scan = preg->reginput; opnd = OPERAND(p); switch (OP(preg, p)) { case ANY: while (!reg_iseol(preg, *scan) && count < max) { count++; scan++; } break; case EXACTLY: while (count < max) { |
︙ | ︙
21514 21515 21516 21517 21518 21519 21520 | if (reg_iseol(preg, ch) || reg_range_find(preg->program + opnd, ch) != 0) { break; } count++; scan += n; } break; | | | | 21540 21541 21542 21543 21544 21545 21546 21547 21548 21549 21550 21551 21552 21553 21554 21555 21556 | if (reg_iseol(preg, ch) || reg_range_find(preg->program + opnd, ch) != 0) { break; } count++; scan += n; } break; default: preg->err = REG_ERR_INTERNAL; count = 0; break; } preg->reginput = scan; return(count); } |
︙ | ︙ | |||
21541 21542 21543 21544 21545 21546 21547 | return(p-offset); else return(p+offset); } static int regopsize(regex_t *preg, int p ) { | | | 21567 21568 21569 21570 21571 21572 21573 21574 21575 21576 21577 21578 21579 21580 21581 | return(p-offset); else return(p+offset); } static int regopsize(regex_t *preg, int p ) { switch (OP(preg, p)) { case REP: case REPMIN: case REPX: case REPXMIN: return 5; |
︙ | ︙ | |||
21662 21663 21664 21665 21666 21667 21668 | DIR *opendir(const char *name) { DIR *dir = 0; if (name && name[0]) { size_t base_length = strlen(name); | | | | | 21688 21689 21690 21691 21692 21693 21694 21695 21696 21697 21698 21699 21700 21701 21702 21703 21704 21705 21706 21707 21708 21709 21710 21711 21712 21713 21714 21715 21716 21717 | DIR *opendir(const char *name) { DIR *dir = 0; if (name && name[0]) { size_t base_length = strlen(name); const char *all = strchr("/\\", name[base_length - 1]) ? "*" : "/*"; if ((dir = (DIR *) Jim_Alloc(sizeof *dir)) != 0 && (dir->name = (char *)Jim_Alloc(base_length + strlen(all) + 1)) != 0) { strcat(strcpy(dir->name, name), all); if ((dir->handle = (long)_findfirst(dir->name, &dir->info)) != -1) dir->result.d_name = 0; else { Jim_Free(dir->name); Jim_Free(dir); dir = 0; } } else { Jim_Free(dir); dir = 0; errno = ENOMEM; } } else { errno = EINVAL; |
︙ | ︙ | |||
21699 21700 21701 21702 21703 21704 21705 | if (dir) { if (dir->handle != -1) result = _findclose(dir->handle); Jim_Free(dir->name); Jim_Free(dir); } | | | 21725 21726 21727 21728 21729 21730 21731 21732 21733 21734 21735 21736 21737 21738 21739 | if (dir) { if (dir->handle != -1) result = _findclose(dir->handle); Jim_Free(dir->name); Jim_Free(dir); } if (result == -1) errno = EBADF; return result; } struct dirent *readdir(DIR * dir) { struct dirent *result = 0; |
︙ | ︙ | |||
21727 21728 21729 21730 21731 21732 21733 | #endif #ifndef JIM_BOOTSTRAP_LIB_ONLY #include <errno.h> #include <string.h> #ifdef USE_LINENOISE | > | > | 21753 21754 21755 21756 21757 21758 21759 21760 21761 21762 21763 21764 21765 21766 21767 21768 21769 | #endif #ifndef JIM_BOOTSTRAP_LIB_ONLY #include <errno.h> #include <string.h> #ifdef USE_LINENOISE #ifdef HAVE_UNISTD_H #include <unistd.h> #endif #include "linenoise.h" #else #define MAX_LINE_LEN 512 #endif char *Jim_HistoryGetline(const char *prompt) { |
︙ | ︙ | |||
21780 21781 21782 21783 21784 21785 21786 | linenoiseHistorySave(filename); #endif } void Jim_HistoryShow(void) { #ifdef USE_LINENOISE | | | 21808 21809 21810 21811 21812 21813 21814 21815 21816 21817 21818 21819 21820 21821 21822 | linenoiseHistorySave(filename); #endif } void Jim_HistoryShow(void) { #ifdef USE_LINENOISE int i; int len; char **history = linenoiseHistory(&len); for (i = 0; i < len; i++) { printf("%4d %s\n", i + 1, history[i]); } #endif |
︙ | ︙ | |||
21815 21816 21817 21818 21819 21820 21821 | Jim_SetVariableStrWithStr(interp, JIM_INTERACTIVE, "1"); while (1) { Jim_Obj *scriptObjPtr; const char *result; int reslen; char prompt[20]; | < | | | | < < > < < < < | | | < | 21843 21844 21845 21846 21847 21848 21849 21850 21851 21852 21853 21854 21855 21856 21857 21858 21859 21860 21861 21862 21863 21864 21865 21866 21867 21868 21869 21870 21871 21872 21873 21874 21875 21876 21877 21878 21879 21880 21881 21882 21883 21884 21885 21886 21887 21888 21889 21890 21891 21892 21893 21894 21895 21896 21897 21898 21899 21900 21901 21902 21903 21904 21905 21906 21907 21908 21909 21910 21911 21912 21913 21914 | Jim_SetVariableStrWithStr(interp, JIM_INTERACTIVE, "1"); while (1) { Jim_Obj *scriptObjPtr; const char *result; int reslen; char prompt[20]; if (retcode != JIM_OK) { const char *retcodestr = Jim_ReturnCode(retcode); if (*retcodestr == '?') { snprintf(prompt, sizeof(prompt) - 3, "[%d] . ", retcode); } else { snprintf(prompt, sizeof(prompt) - 3, "[%s] . ", retcodestr); } } else { strcpy(prompt, ". "); } scriptObjPtr = Jim_NewStringObj(interp, "", 0); Jim_IncrRefCount(scriptObjPtr); while (1) { char state; char *line; line = Jim_HistoryGetline(prompt); if (line == NULL) { if (errno == EINTR) { continue; } Jim_DecrRefCount(interp, scriptObjPtr); retcode = JIM_OK; goto out; } if (Jim_Length(scriptObjPtr) != 0) { Jim_AppendString(interp, scriptObjPtr, "\n", 1); } Jim_AppendString(interp, scriptObjPtr, line, -1); free(line); if (Jim_ScriptIsComplete(interp, scriptObjPtr, &state)) break; snprintf(prompt, sizeof(prompt), "%c> ", state); } #ifdef USE_LINENOISE if (strcmp(Jim_String(scriptObjPtr), "h") == 0) { Jim_HistoryShow(); Jim_DecrRefCount(interp, scriptObjPtr); continue; } Jim_HistoryAdd(Jim_String(scriptObjPtr)); if (history_file) { Jim_HistorySave(history_file); } #endif retcode = Jim_EvalObj(interp, scriptObjPtr); Jim_DecrRefCount(interp, scriptObjPtr); if (retcode == JIM_EXIT) { break; } if (retcode == JIM_ERR) { Jim_MakeErrorMessage(interp); } result = Jim_GetString(Jim_GetResult(interp), &reslen); if (reslen) { |
︙ | ︙ | |||
21908 21909 21910 21911 21912 21913 21914 | extern int Jim_initjimshInit(Jim_Interp *interp); static void JimSetArgv(Jim_Interp *interp, int argc, char *const argv[]) { int n; Jim_Obj *listObj = Jim_NewListObj(interp, NULL, 0); | | > > > > > > > > > > > > > > > > > > > > > > | | > | | > > > | 21929 21930 21931 21932 21933 21934 21935 21936 21937 21938 21939 21940 21941 21942 21943 21944 21945 21946 21947 21948 21949 21950 21951 21952 21953 21954 21955 21956 21957 21958 21959 21960 21961 21962 21963 21964 21965 21966 21967 21968 21969 21970 21971 21972 21973 21974 21975 21976 21977 21978 21979 21980 21981 21982 21983 21984 21985 21986 21987 21988 21989 21990 21991 21992 21993 21994 21995 21996 21997 21998 21999 22000 22001 22002 22003 22004 22005 22006 22007 22008 22009 22010 22011 22012 22013 22014 22015 22016 22017 22018 22019 | extern int Jim_initjimshInit(Jim_Interp *interp); static void JimSetArgv(Jim_Interp *interp, int argc, char *const argv[]) { int n; Jim_Obj *listObj = Jim_NewListObj(interp, NULL, 0); for (n = 0; n < argc; n++) { Jim_Obj *obj = Jim_NewStringObj(interp, argv[n], -1); Jim_ListAppendElement(interp, listObj, obj); } Jim_SetVariableStr(interp, "argv", listObj); Jim_SetVariableStr(interp, "argc", Jim_NewIntObj(interp, argc)); } static void JimPrintErrorMessage(Jim_Interp *interp) { Jim_MakeErrorMessage(interp); fprintf(stderr, "%s\n", Jim_String(Jim_GetResult(interp))); } void usage(const char* executable_name) { printf("jimsh version %d.%d\n", JIM_VERSION / 100, JIM_VERSION % 100); printf("Usage: %s\n", executable_name); printf("or : %s [options] [filename]\n", executable_name); printf("\n"); printf("Without options: Interactive mode\n"); printf("\n"); printf("Options:\n"); printf(" --version : prints the version string\n"); printf(" --help : prints this text\n"); printf(" -e CMD : executes command CMD\n"); printf(" NOTE: all subsequent options will be passed as arguments to the command\n"); printf(" [filename] : executes the script contained in the named file\n"); printf(" NOTE: all subsequent options will be passed to the script\n\n"); } int main(int argc, char *const argv[]) { int retcode; Jim_Interp *interp; char *const orig_argv0 = argv[0]; if (argc > 1 && strcmp(argv[1], "--version") == 0) { printf("%d.%d\n", JIM_VERSION / 100, JIM_VERSION % 100); return 0; } else if (argc > 1 && strcmp(argv[1], "--help") == 0) { usage(argv[0]); return 0; } interp = Jim_CreateInterp(); Jim_RegisterCoreCommands(interp); if (Jim_InitStaticExtensions(interp) != JIM_OK) { JimPrintErrorMessage(interp); } Jim_SetVariableStrWithStr(interp, "jim::argv0", orig_argv0); Jim_SetVariableStrWithStr(interp, JIM_INTERACTIVE, argc == 1 ? "1" : "0"); retcode = Jim_initjimshInit(interp); if (argc == 1) { if (retcode == JIM_ERR) { JimPrintErrorMessage(interp); } if (retcode != JIM_EXIT) { JimSetArgv(interp, 0, NULL); retcode = Jim_InteractivePrompt(interp); } } else { if (argc > 2 && strcmp(argv[1], "-e") == 0) { JimSetArgv(interp, argc - 3, argv + 3); retcode = Jim_Eval(interp, argv[2]); if (retcode != JIM_ERR) { printf("%s\n", Jim_String(Jim_GetResult(interp))); } } else { |
︙ | ︙ |
Added autosetup/pkg-config.tcl.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 | # Copyright (c) 2016 WorkWare Systems http://www.workware.net.au/ # All rights reserved # @synopsis: # # The 'pkg-config' module allows package information to be found via pkg-config # # If not cross-compiling, the package path should be determined automatically # by pkg-config. # If cross-compiling, the default package path is the compiler sysroot. # If the C compiler doesn't support -print-sysroot, the path can be supplied # by the --sysroot option or by defining SYSROOT. # # PKG_CONFIG may be set to use an alternative to pkg-config use cc module-options { sysroot:dir => "Override compiler sysroot for pkg-config search path" } # @pkg-config-init ?required? # # Initialises the pkg-config system. Unless required is set to 0, # it is a fatal error if the pkg-config # This command will normally be called automatically as required, # but it may be invoked explicitly if lack of pkg-config is acceptable. # # Returns 1 if ok, or 0 if pkg-config not found/usable (only if required=0) # proc pkg-config-init {{required 1}} { if {[is-defined HAVE_PKG_CONFIG]} { return [get-define HAVE_PKG_CONFIG] } set found 0 define PKG_CONFIG [get-env PKG_CONFIG pkg-config] msg-checking "Checking for pkg-config..." try { set version [exec [get-define PKG_CONFIG] --version] msg-result $version define PKG_CONFIG_VERSION $version set found 1 if {[opt-val sysroot] ne ""} { define SYSROOT [file-normalize [opt-val sysroot]] msg-result "Using specified sysroot [get-define SYSROOT]" } elseif {[get-define build] ne [get-define host]} { if {[catch {exec-with-stderr [get-define CC] -print-sysroot} result errinfo] == 0} { # Use the compiler sysroot, if there is one define SYSROOT $result msg-result "Found compiler sysroot $result" } else { set msg "pkg-config: Cross compiling, but no compiler sysroot and no --sysroot supplied" if {$required} { user-error $msg } else { msg-result $msg } set found 0 } } if {[is-defined SYSROOT]} { set sysroot [get-define SYSROOT] # XXX: It's possible that these should be set only when invoking pkg-config global env set env(PKG_CONFIG_DIR) "" # Do we need to try /usr/local as well or instead? set env(PKG_CONFIG_LIBDIR) $sysroot/usr/lib/pkgconfig:$sysroot/usr/share/pkgconfig set env(PKG_CONFIG_SYSROOT_DIR) $sysroot } } on error msg { msg-result "[get-define PKG_CONFIG] (not found)" if {$required} { user-error "No usable pkg-config" } } define HAVE_PKG_CONFIG $found return $found } # @pkg-config module ?requirements? # # Use pkg-config to find the given module meeting the given requirements. # e.g. 
# ## pkg-config pango >= 1.37.0 # # If found, returns 1 and sets HAVE_PKG_PANGO to 1 along with: # ## PKG_PANGO_VERSION to the found version ## PKG_PANGO_LIBS to the required libs (--libs-only-l) ## PKG_PANGO_LDFLAGS to the required linker flags (--libs-only-L) ## PKG_PANGO_CFLAGS to the required compiler flags (--cflags) # # If not found, returns 0. # proc pkg-config {module args} { set ok [pkg-config-init] msg-checking "Checking for $module $args..." if {!$ok} { msg-result "no pkg-config" return 0 } try { set version [exec [get-define PKG_CONFIG] --modversion "$module $args"] msg-result $version set prefix [feature-define-name $module PKG_] define HAVE_${prefix} define ${prefix}_VERSION $version define ${prefix}_LIBS [exec pkg-config --libs-only-l $module] define ${prefix}_LDFLAGS [exec pkg-config --libs-only-L $module] define ${prefix}_CFLAGS [exec pkg-config --cflags $module] return 1 } on error msg { msg-result "not found" configlog "pkg-config --modversion $module $args: $msg" return 0 } } # @pkg-config-get module setting # # Convenience access to the results of pkg-config # # For example, [pkg-config-get pango CFLAGS] returns # the value of PKG_PANGO_CFLAGS, or "" if not defined. proc pkg-config-get {module name} { set prefix [feature-define-name $module PKG_] get-define ${prefix}_${name} "" } |
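The @-comments above double as the module's reference documentation. As a rough sketch of how a project's auto.def might call into it (this fragment is illustrative only; the zlib package name and version constraint are assumptions, not something this check-in adds):

    # sketch of an auto.def using the new module; "zlib >= 1.2.8" is assumed
    use cc pkg-config
    if {[pkg-config zlib >= 1.2.8]} {
        # on success the module defines HAVE_PKG_ZLIB, PKG_ZLIB_VERSION,
        # PKG_ZLIB_LIBS, PKG_ZLIB_LDFLAGS and PKG_ZLIB_CFLAGS
        define-append LIBS    [pkg-config-get zlib LIBS]
        define-append LDFLAGS [pkg-config-get zlib LDFLAGS]
        define-append CFLAGS  [pkg-config-get zlib CFLAGS]
    } else {
        user-error "zlib development files not found"
    }

When cross-compiling, pkg-config-init first needs a sysroot (from the compiler or the --sysroot option) as described in the header comment, before any lookup like the one above can succeed.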
Changes to autosetup/system.tcl.
1 2 3 4 5 6 | # Copyright (c) 2010 WorkWare Systems http://www.workware.net.au/ # All rights reserved # @synopsis: # # This module supports common system interrogation and options | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 | # Copyright (c) 2010 WorkWare Systems http://www.workware.net.au/ # All rights reserved # @synopsis: # # This module supports common system interrogation and options # such as --host, --build, --prefix, and setting srcdir, builddir, and EXEEXT # # It also support the 'feature' naming convention, where searching # for a feature such as sys/type.h defines HAVE_SYS_TYPES_H # module-options { host:host-alias => {a complete or partial cpu-vendor-opsys for the system where the application will run (defaults to the same value as --build)} |
︙ | ︙ | |||
102 103 104 105 106 107 108 | # @make-template template ?outfile? # # Reads the input file <srcdir>/$template and writes the output file $outfile. # If $outfile is blank/omitted, $template should end with ".in" which # is removed to create the output file name. # | | | 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 | # @make-template template ?outfile? # # Reads the input file <srcdir>/$template and writes the output file $outfile. # If $outfile is blank/omitted, $template should end with ".in" which # is removed to create the output file name. # # Each pattern of the form @define@ is replaced with the corresponding # define, if it exists, or left unchanged if not. # # The special value @srcdir@ is substituted with the relative # path to the source directory from the directory where the output # file is created, while the special value @top_srcdir@ is substituted # with the relative path to the top level source directory. # |
︙ | ︙ |
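Since the make-template comment now spells out the @define@ substitution rules, a small worked example may be easier to follow than prose alone (the file name and values here are invented for illustration):

    # auto.def (sketch)
    define VERSION 1.37
    make-template Makefile.in         ;# output name defaults to "Makefile"

    # Makefile.in template line        # resulting Makefile line
    #  VERSION = @VERSION@        -->   VERSION = 1.37
    #  srcdir  = @srcdir@         -->   srcdir  = <relative path to the sources>
    #  OTHER   = @NOTDEFINED@     -->   OTHER   = @NOTDEFINED@   (left unchanged)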
Added autosetup/tmake.auto.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 | # Copyright (c) 2016 WorkWare Systems http://www.workware.net.au/ # All rights reserved # Auto-load module for 'tmake' build system integration use init autosetup_add_init_type tmake "Tcl-based tmake build system" { autosetup_check_create auto.def \ {# Initial auto.def created by 'autosetup --init=tmake' # vim:set syntax=tcl: use cc cc-lib cc-db cc-shared use tmake # Add any user options here # Really want a --configure that takes over the rest of the command line options { } cc-check-tools ar ranlib set objdir [get-env BUILDDIR objdir] make-config-header $objdir/include/autoconf.h make-tmake-settings $objdir/settings.conf {[A-Z]*} } autosetup_check_create project.spec \ {# Initial project.spec created by 'autosetup --init=tmake' # vim:set syntax=tcl: define? DESTDIR _install # XXX If configure creates additional/different files than include/autoconf.h # that should be reflected here # We use [set AUTOREMAKE] here to avoid rebuilding settings.conf # if the AUTOREMAKE command changes Depends {settings.conf include/autoconf.h} auto.def -msg {note Configuring...} -do { run [set AUTOREMAKE] >$build/config.out } -onerror {puts [readfile $build/config.out]} -fatal Clean config.out DistClean --source config.log DistClean settings.conf include/autoconf.h # If not configured, configure with default options # Note that it is expected that configure will normally be run # separately. This is just a convenience for a host build define? AUTOREMAKE configure TOPBUILDDIR=$TOPBUILDDIR --conf=auto.def Load settings.conf # e.g. for up autoconf.h IncludePaths include ifconfig CONFIGURED # Hmmm, but should we turn off AutoSubDirs? #AutoSubDirs off } if {![file exists build.spec]} { puts "Note: I don't see build.spec. Try running: tmake --genie" } } |
Added autosetup/tmake.tcl.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 | # Copyright (c) 2011 WorkWare Systems http://www.workware.net.au/ # All rights reserved # @synopsis: # # The 'tmake' module makes it easy to support the tmake build system. # # The following variables are set: # ## CONFIGURED - to indicate that the project is configured use system module-options {} define CONFIGURED # @make-tmake-settings outfile patterns ... # # Examines all defined variables which match the given patterns (defaults to "*") # and writes a tmake-compatible .conf file defining those variables. # For example, if ABC is "3 monkeys" and ABC matches a pattern, then the file will include: # ## define ABC {3 monkeys} # # If the file would be unchanged, it is not written. # # Typical usage is: # # make-tmake-settings [get-env BUILDDIR objdir]/settings.conf {[A-Z]*} proc make-tmake-settings {file args} { file mkdir [file dirname $file] set lines {} if {[llength $args] == 0} { set args * } foreach n [lsort [dict keys [all-defines]]] { foreach p $args { if {[string match $p $n]} { set value [get-define $n] lappend lines "define $n [list $value]" break } } } set buf [join $lines \n] write-if-changed $file $buf { msg-result "Created $file" } } |
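To make make-tmake-settings concrete, here is a sketch of its effect (the define names and values are assumptions, not taken from this check-in):

    # assume configure has established, among others:
    define CC {gcc -std=gnu99}
    define PACKAGE_VERSION 1.37
    make-tmake-settings [get-env BUILDDIR objdir]/settings.conf {[A-Z]*}

    # objdir/settings.conf would then contain lines such as:
    #   define CC {gcc -std=gnu99}
    #   define PACKAGE_VERSION 1.37
    # Only names matching the given patterns are exported, and the file is
    # rewritten only when its contents would actually change.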
Changes to skins/black_and_white/css.txt.
︙ | ︙ | |||
104 105 106 107 108 109 110 | text-align: center; border:1px solid #999; border-width:1px 0px; background-color: #eee; color: #333; } div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, | | | | 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 | text-align: center; border:1px solid #999; border-width:1px 0px; background-color: #eee; color: #333; } div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: #333; text-decoration: none; } div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { color: #eee; background-color: #333; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { |
︙ | ︙ |
Changes to skins/blitz/css.txt.
︙ | ︙ | |||
892 893 894 895 896 897 898 | border-bottom: 1px solid #ddd; } .submenu input, .submenu select { margin: 0 0 0 5px; } | | | | 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 | border-bottom: 1px solid #ddd; } .submenu input, .submenu select { margin: 0 0 0 5px; } .submenu a, .submenu label { color: #3b5c6b; padding: 5px 15px; text-decoration: none; border: 1px solid transparent; border-radius: 5px; } .submenu a:hover, .submenu label:hover { border: 1px solid #ccc; } /* Section * Cap/header to distinguish a section. Displayed within a content div. ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ |
︙ | ︙ |
Changes to skins/blitz/footer.txt.
1 2 3 4 5 | </div> <!-- end div container --> </div> <!-- end div middle max-full-width --> <div class="footer"> <div class="container"> <div class="pull-right"> | | | 1 2 3 4 5 6 7 8 9 10 11 12 | </div> <!-- end div container --> </div> <!-- end div middle max-full-width --> <div class="footer"> <div class="container"> <div class="pull-right"> <a href="https://www.fossil-scm.org/">Fossil $release_version $manifest_version $manifest_date</a> </div> This page was generated in about <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s </div> </div> </body> </html> |
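A note on the timing expression in this footer, since the arithmetic is easy to misread: as I understand the TH1 utime and stime commands, they report user and system CPU time in microseconds, so the expression pads the total up to the next whole millisecond before scaling to seconds. For 2345 microseconds of CPU time, for example:

    ([utime]+[stime]+1000)/1000*0.001
      = (2345 + 1000) / 1000 * 0.001     (integer division)
      = 3 * 0.001
      = 0.003                            -> "... generated in about 0.003s"

One side effect is that even a very fast page reports 0.001s rather than 0. The blitz_no_logo skin below uses the identical footer.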
Changes to skins/blitz_no_logo/css.txt.
︙ | ︙ | |||
892 893 894 895 896 897 898 | border-bottom: 1px solid #ddd; } .submenu input, .submenu select { margin: 0 0 0 5px; } | | | | 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 | border-bottom: 1px solid #ddd; } .submenu input, .submenu select { margin: 0 0 0 5px; } .submenu a, .submenu label { color: #3b5c6b; padding: 5px 15px; text-decoration: none; border: 1px solid transparent; border-radius: 5px; } .submenu a:hover, .submenu label:hover { border: 1px solid #ccc; } /* Section * Cap/header to distinguish a section. Displayed within a content div. ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ |
︙ | ︙ |
Changes to skins/blitz_no_logo/footer.txt.
1 2 3 4 5 | </div> <!-- end div container --> </div> <!-- end div middle max-full-width --> <div class="footer"> <div class="container"> <div class="pull-right"> | | | 1 2 3 4 5 6 7 8 9 10 11 12 | </div> <!-- end div container --> </div> <!-- end div middle max-full-width --> <div class="footer"> <div class="container"> <div class="pull-right"> <a href="https://www.fossil-scm.org/">Fossil $release_version $manifest_version $manifest_date</a> </div> This page was generated in about <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s </div> </div> </body> </html> |
Changes to skins/default/css.txt.
︙ | ︙ | |||
102 103 104 105 106 107 108 | .submenu { font-size: .7em; margin-top: 10px; padding: 10px; border-bottom: 1px solid #ccc; } | | | | 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 | .submenu { font-size: .7em; margin-top: 10px; padding: 10px; border-bottom: 1px solid #ccc; } .submenu a, .submenu label { padding: 10px 11px; text-decoration:none; color: #777; } .submenu a:hover, .submenu label:hover { padding: 6px 10px; border: 1px solid #ccc; border-radius: 5px; color: #000; } .content { |
︙ | ︙ |
Changes to skins/eagle/css.txt.
︙ | ︙ | |||
71 72 73 74 75 76 77 | font-size: 0.9em; font-weight: bold; text-align: center; background-color: #485D7B; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, | | > | > | 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 | font-size: 0.9em; font-weight: bold; text-align: center; background-color: #485D7B; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { text-decoration: underline; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { padding: 0ex 1ex 0ex 2ex; |
︙ | ︙ |
Changes to skins/eagle/footer.txt.
1 2 3 4 | <div class="footer"> <th1> proc getTclVersion {} { if {[catch {tclEval info patchlevel} tclVersion] == 0} { | | | 1 2 3 4 5 6 7 8 9 10 11 12 | <div class="footer"> <th1> proc getTclVersion {} { if {[catch {tclEval info patchlevel} tclVersion] == 0} { return "<a href=\"https://www.tcl.tk/\">Tcl</a> version $tclVersion" } return "" } proc getVersion { version } { set length [string length $version] return [string range $version 1 [expr {$length - 2}]] } |
︙ | ︙ |
Changes to skins/enhanced1/css.txt.
︙ | ︙ | |||
69 70 71 72 73 74 75 | padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #456878; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, | | > | > | 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 | padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #456878; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { color: #558195; background-color: white; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { |
︙ | ︙ |
Changes to skins/enhanced1/footer.txt.
1 2 3 4 | <div class="footer"> <th1> proc getTclVersion {} { if {[catch {tclEval info patchlevel} tclVersion] == 0} { | | | 1 2 3 4 5 6 7 8 9 10 11 12 | <div class="footer"> <th1> proc getTclVersion {} { if {[catch {tclEval info patchlevel} tclVersion] == 0} { return "<a href=\"https://www.tcl.tk/\">Tcl</a> version $tclVersion" } return "" } proc getVersion { version } { set length [string length $version] return [string range $version 1 [expr {$length - 2}]] } |
︙ | ︙ |
Changes to skins/khaki/css.txt.
︙ | ︙ | |||
67 68 69 70 71 72 73 | padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #c0af58; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, | | > | > | 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 | padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #c0af58; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover div.submenu label:hover { color: #a09048; background-color: white; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { |
︙ | ︙ |
Changes to skins/original/css.txt.
︙ | ︙ | |||
69 70 71 72 73 74 75 | padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #456878; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, | | > | > | 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 | padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #456878; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { color: #558195; background-color: white; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { |
︙ | ︙ |
Changes to skins/plain_gray/css.txt.
︙ | ︙ | |||
69 70 71 72 73 74 75 | padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #606060; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, | | > | > | 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 | padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #606060; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { color: #404040; background-color: white; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { |
︙ | ︙ |
Changes to skins/rounded1/css.txt.
︙ | ︙ | |||
82 83 84 85 86 87 88 | box-shadow: 0px 3px 4px #999; } div.mainmenu a, div.mainmenu a:visited { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } | | | | 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 | box-shadow: 0px 3px 4px #999; } div.mainmenu a, div.mainmenu a:visited { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } div.submenu a, div.submenu a:visited, a.button, div.submenu label div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited { padding: 2px 8px; color: #000; font-family: Arial; text-decoration: none; margin:auto; border-radius: 5px; background-color: #e0e0e0; text-shadow: 0px -1px 0px #eee; border: 1px solid #000; } div.mainmenu a:hover { color: #000; background-color: white; } div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { background-color: #c0c0c0; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { background-color: #fff; |
︙ | ︙ |
Changes to skins/xekri/css.txt.
︙ | ︙ | |||
156 157 158 159 160 161 162 | div.submenu { border-top: 1px solid #0a0; border-radius: 0; display: block; } | | | | 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 | div.submenu { border-top: 1px solid #0a0; border-radius: 0; display: block; } div.mainmenu a, div.submenu a, div.submenu label { color: #000; padding: 0 0.75rem; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.submenu label:hover { color: #fff; text-shadow: 0px 0px 6px #0f0; } div.submenu * { margin: 0 0.5rem; vertical-align: middle; |
︙ | ︙ |
Changes to src/add.c.
︙ | ︙ | |||
69 70 71 72 73 74 75 | ** entries should be removed. 2012-02-04 */ ".fos", ".fos-journal", ".fos-wal", ".fos-shm", }; | | | | > > > | | > > > > | > > > > > > | | | 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 | ** entries should be removed. 2012-02-04 */ ".fos", ".fos-journal", ".fos-wal", ".fos-shm", }; /* Possible names of auxiliary files generated when the "manifest" property ** is used */ static const struct { const char *fname; int flg; }aManifestflags[] = { { "manifest", MFESTFLG_RAW }, { "manifest.uuid", MFESTFLG_UUID }, { "manifest.tags", MFESTFLG_TAGS } }; static const char *azManifests[3]; /* ** Names of repository files, if they exist in the checkout. */ static const char *azRepo[4] = { 0, 0, 0, 0 }; /* Cached setting "manifest" */ static int cachedManifest = -1; static int numManifests; if( cachedManifest == -1 ){ int i; Blob repo; cachedManifest = db_get_manifest_setting(); numManifests = 0; for(i=0; i<count(aManifestflags); i++){ if( cachedManifest&aManifestflags[i].flg ) { azManifests[numManifests++] = aManifestflags[i].fname; } } blob_zero(&repo); if( file_tree_name(g.zRepositoryName, &repo, 0, 0) ){ const char *zRepo = blob_str(&repo); azRepo[0] = zRepo; azRepo[1] = mprintf("%s-journal", zRepo); azRepo[2] = mprintf("%s-wal", zRepo); azRepo[3] = mprintf("%s-shm", zRepo); } } if( N<0 ) return 0; if( N<count(azName) ) return azName[N]; N -= count(azName); if( cachedManifest ){ if( N<numManifests ) return azManifests[N]; N -= numManifests; } if( !omitRepo && N<count(azRepo) ) return azRepo[N]; return 0; } /* ** Return a list of all reserved filenames as an SQL list. |
︙ | ︙ | |||
211 212 213 214 215 216 217 | zRepo = blob_str(&repoName); } if( filenames_are_case_sensitive() ){ xCmp = fossil_strcmp; }else{ xCmp = fossil_stricmp; } | | | 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 | zRepo = blob_str(&repoName); } if( filenames_are_case_sensitive() ){ xCmp = fossil_strcmp; }else{ xCmp = fossil_stricmp; } db_prepare(&loop, "SELECT pathname FROM sfile ORDER BY pathname"); while( db_step(&loop)==SQLITE_ROW ){ const char *zToAdd = db_column_text(&loop, 0); if( fossil_strcmp(zToAdd, zRepo)==0 ) continue; for(i=0; (zReserved = fossil_reserved_name(i, 0))!=0; i++){ if( xCmp(zToAdd, zReserved)==0 ) break; } if( zReserved ) continue; |
︙ | ︙ | |||
292 293 294 295 296 297 298 | } if( zIgnoreFlag==0 ){ zIgnoreFlag = db_get("ignore-glob", 0); } if( db_get_boolean("dotfiles", 0) ) scanFlags |= SCAN_ALL; vid = db_lget_int("checkout",0); db_begin_transaction(); | | | 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 | } if( zIgnoreFlag==0 ){ zIgnoreFlag = db_get("ignore-glob", 0); } if( db_get_boolean("dotfiles", 0) ) scanFlags |= SCAN_ALL; vid = db_lget_int("checkout",0); db_begin_transaction(); db_multi_exec("CREATE TEMP TABLE sfile(pathname TEXT PRIMARY KEY %s)", filename_collation()); pClean = glob_create(zCleanFlag); pIgnore = glob_create(zIgnoreFlag); nRoot = strlen(g.zLocalRoot); /* Load the names of all files that are to be added into sfile temp table */ for(i=2; i<g.argc; i++){ |
︙ | ︙ | |||
334 335 336 337 338 339 340 | forceFlag = 1; }else if( cReply!='y' && cReply!='Y' ){ blob_reset(&fullName); continue; } } db_multi_exec( | | | 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 | forceFlag = 1; }else if( cReply!='y' && cReply!='Y' ){ blob_reset(&fullName); continue; } } db_multi_exec( "INSERT OR IGNORE INTO sfile(pathname) VALUES(%Q)", zTreeName ); } blob_reset(&fullName); } glob_free(pIgnore); glob_free(pClean); |
︙ | ︙ | |||
383 384 385 386 387 388 389 | ** ** The temporary table "fremove" is dropped after being processed. */ static void process_files_to_remove( int dryRunFlag /* Zero to actually operate on the file-system. */ ){ Stmt remove; | | | 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 | ** ** The temporary table "fremove" is dropped after being processed. */ static void process_files_to_remove( int dryRunFlag /* Zero to actually operate on the file-system. */ ){ Stmt remove; if( db_table_exists("temp", "fremove") ){ db_prepare(&remove, "SELECT x FROM fremove ORDER BY x;"); while( db_step(&remove)==SQLITE_ROW ){ const char *zOldName = db_column_text(&remove, 0); if( !dryRunFlag ){ file_delete(zOldName); } fossil_print("DELETED_FILE %s\n", zOldName); |
︙ | ︙ | |||
459 460 461 462 463 464 465 | }else{ #if FOSSIL_ENABLE_LEGACY_MV_RM removeFiles = db_get_boolean("mv-rm-files",0); #else removeFiles = FOSSIL_MV_RM_FILE; #endif } | | | | 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 | }else{ #if FOSSIL_ENABLE_LEGACY_MV_RM removeFiles = db_get_boolean("mv-rm-files",0); #else removeFiles = FOSSIL_MV_RM_FILE; #endif } db_multi_exec("CREATE TEMP TABLE sfile(pathname TEXT PRIMARY KEY %s)", filename_collation()); for(i=2; i<g.argc; i++){ Blob treeName; char *zTreeName; file_tree_name(g.argv[i], &treeName, 0, 1); zTreeName = blob_str(&treeName); db_multi_exec( "INSERT OR IGNORE INTO sfile" " SELECT pathname FROM vfile" " WHERE (pathname=%Q %s" " OR (pathname>'%q/' %s AND pathname<'%q0' %s))" " AND NOT deleted", zTreeName, filename_collation(), zTreeName, filename_collation(), zTreeName, filename_collation() ); blob_reset(&treeName); } db_prepare(&loop, "SELECT pathname FROM sfile"); while( db_step(&loop)==SQLITE_ROW ){ fossil_print("DELETED %s\n", db_column_text(&loop, 0)); if( removeFiles ) add_file_to_remove(db_column_text(&loop, 0)); } db_finalize(&loop); if( !dryRunFlag ){ db_multi_exec( |
︙ | ︙ | |||
548 549 550 551 552 553 554 | #else caseSensitive = 1; /* Unix */ #endif caseSensitive = db_get_boolean("case-sensitive",caseSensitive); } if( !caseSensitive && g.localOpen ){ db_multi_exec( | | | < | 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 | #else caseSensitive = 1; /* Unix */ #endif caseSensitive = db_get_boolean("case-sensitive",caseSensitive); } if( !caseSensitive && g.localOpen ){ db_multi_exec( "CREATE INDEX IF NOT EXISTS localdb.vfile_nocase" " ON vfile(pathname COLLATE nocase)" ); } } return caseSensitive; } /* |
︙ | ︙ | |||
656 657 658 659 660 661 662 | /* step 1: ** Populate the temp table "sfile" with the names of all unmanaged ** files currently in the check-out, except for files that match the ** --ignore or ignore-glob patterns and dot-files. Then add all of ** the files in the sfile temp table to the set of managed files. */ | | | 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 | /* step 1: ** Populate the temp table "sfile" with the names of all unmanaged ** files currently in the check-out, except for files that match the ** --ignore or ignore-glob patterns and dot-files. Then add all of ** the files in the sfile temp table to the set of managed files. */ db_multi_exec("CREATE TEMP TABLE sfile(pathname TEXT PRIMARY KEY %s)", filename_collation()); n = strlen(g.zLocalRoot); blob_init(&path, g.zLocalRoot, n-1); /* now we read the complete file structure into a temp table */ pClean = glob_create(zCleanFlag); pIgnore = glob_create(zIgnoreFlag); vfile_scan(&path, blob_size(&path), scanFlags, pClean, pIgnore); |
︙ | ︙ | |||
777 778 779 780 781 782 783 | ** ** The temporary table "fmove" is dropped after being processed. */ static void process_files_to_move( int dryRunFlag /* Zero to actually operate on the file-system. */ ){ Stmt move; | | | 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 | ** ** The temporary table "fmove" is dropped after being processed. */ static void process_files_to_move( int dryRunFlag /* Zero to actually operate on the file-system. */ ){ Stmt move; if( db_table_exists("temp", "fmove") ){ db_prepare(&move, "SELECT x, y FROM fmove ORDER BY x;"); while( db_step(&move)==SQLITE_ROW ){ const char *zOldName = db_column_text(&move, 0); const char *zNewName = db_column_text(&move, 1); if( !dryRunFlag ){ int isOldDir = file_isdir(zOldName); if( isOldDir==1 ){ |
︙ | ︙ |
Changes to src/allrepo.c.
︙ | ︙ | |||
49 50 51 52 53 54 55 | /* ** Build a string that contains all of the command-line options ** specified as arguments. If the option name begins with "+" then ** it takes an argument. Without the "+" it does not. */ static void collect_argument(Blob *pExtra, const char *zArg, const char *zShort){ | | > | | 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 | /* ** Build a string that contains all of the command-line options ** specified as arguments. If the option name begins with "+" then ** it takes an argument. Without the "+" it does not. */ static void collect_argument(Blob *pExtra, const char *zArg, const char *zShort){ const char *z = find_option(zArg, zShort, 0); if( z!=0 ){ blob_appendf(pExtra, " %s", z); } } static void collect_argument_value(Blob *pExtra, const char *zArg){ const char *zValue = find_option(zArg, 0, 1); if( zValue ){ if( zValue[0] ){ blob_appendf(pExtra, " --%s %s", zArg, zValue); |
︙ | ︙ | |||
136 137 138 139 140 141 142 | ** unset conjunction with the "max-loadavg" setting which cannot ** otherwise be set globally. ** ** In addition, the following maintenance operations are supported: ** ** add Add all the repositories named to the set of repositories ** tracked by Fossil. Normally Fossil is able to keep up with | | | 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 | ** unset conjunction with the "max-loadavg" setting which cannot ** otherwise be set globally. ** ** In addition, the following maintenance operations are supported: ** ** add Add all the repositories named to the set of repositories ** tracked by Fossil. Normally Fossil is able to keep up with ** this list by itself, but sometimes it can benefit from this ** hint if you rename repositories. ** ** ignore Arguments are repositories that should be ignored by ** subsequent clean, extras, list, pull, push, rebuild, and ** sync operations. The -c|--ckout option causes the listed ** local checkouts to be ignored instead. ** |
︙ | ︙ | |||
170 171 172 173 174 175 176 | char *zQFilename; Blob extra; int useCheckouts = 0; int quiet = 0; int dryRunFlag = 0; int showFile = find_option("showfile",0,0)!=0; int stopOnError = find_option("dontstop",0,0)==0; | < | 171 172 173 174 175 176 177 178 179 180 181 182 183 184 | char *zQFilename; Blob extra; int useCheckouts = 0; int quiet = 0; int dryRunFlag = 0; int showFile = find_option("showfile",0,0)!=0; int stopOnError = find_option("dontstop",0,0)==0; int nToDel = 0; int showLabel = 0; dryRunFlag = find_option("dry-run","n",0)!=0; if( !dryRunFlag ){ dryRunFlag = find_option("test",0,0)!=0; /* deprecated */ } |
︙ | ︙ | |||
269 270 271 272 273 274 275 276 277 278 279 280 281 282 | collect_argv(&extra, 3); }else if( strncmp(zCmd, "fts-config", n)==0 ){ zCmd = "fts-config -R"; collect_argv(&extra, 3); }else if( strncmp(zCmd, "sync", n)==0 ){ zCmd = "sync -autourl -R"; collect_argument(&extra, "verbose","v"); }else if( strncmp(zCmd, "test-integrity", n)==0 ){ collect_argument(&extra, "parse", 0); zCmd = "test-integrity"; }else if( strncmp(zCmd, "test-orphans", n)==0 ){ zCmd = "test-orphans -R"; }else if( strncmp(zCmd, "test-missing", n)==0 ){ zCmd = "test-missing -q -R"; | > | 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 | collect_argv(&extra, 3); }else if( strncmp(zCmd, "fts-config", n)==0 ){ zCmd = "fts-config -R"; collect_argv(&extra, 3); }else if( strncmp(zCmd, "sync", n)==0 ){ zCmd = "sync -autourl -R"; collect_argument(&extra, "verbose","v"); collect_argument(&extra, "unversioned","u"); }else if( strncmp(zCmd, "test-integrity", n)==0 ){ collect_argument(&extra, "parse", 0); zCmd = "test-integrity"; }else if( strncmp(zCmd, "test-orphans", n)==0 ){ zCmd = "test-orphans -R"; }else if( strncmp(zCmd, "test-missing", n)==0 ){ zCmd = "test-missing -q -R"; |
︙ | ︙ | |||
371 372 373 374 375 376 377 | "INSERT INTO repolist " "SELECT DISTINCT substr(name, 6), name COLLATE nocase" " FROM global_config" " WHERE substr(name, 1, 5)=='repo:'" " ORDER BY 1" ); } | | > | | 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 | "INSERT INTO repolist " "SELECT DISTINCT substr(name, 6), name COLLATE nocase" " FROM global_config" " WHERE substr(name, 1, 5)=='repo:'" " ORDER BY 1" ); } db_multi_exec("CREATE TEMP TABLE toDel(x TEXT)"); db_prepare(&q, "SELECT name, tag FROM repolist ORDER BY 1"); while( db_step(&q)==SQLITE_ROW ){ int rc; const char *zFilename = db_column_text(&q, 0); #if !USE_SEE if( sqlite3_strglob("*.efossil", zFilename)==0 ) continue; #endif if( file_access(zFilename, F_OK) || !file_is_canonical(zFilename) || (useCheckouts && file_isdir(zFilename)!=1) ){ db_multi_exec("INSERT INTO toDel VALUES(%Q)", db_column_text(&q, 1)); nToDel++; continue; } if( zCmd[0]=='l' ){ fossil_print("%s\n", zFilename); continue; }else if( showFile ){ |
︙ | ︙ |
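One practical effect of the sync hunk above, which now also collects the unversioned option: as I read it, a flag given to the umbrella command is simply forwarded to each per-repository sync, so an invocation along the lines of

    fossil all sync -u

(illustrative only) would sync unversioned content for every tracked repository, the same way --verbose has been forwarded all along.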
Changes to src/attach.c.
︙ | ︙ | |||
24 25 26 27 28 29 30 | /* ** WEBPAGE: attachlist ** List attachments. ** ** tkt=TICKETUUID ** page=WIKIPAGE ** | | | 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 | /* ** WEBPAGE: attachlist ** List attachments. ** ** tkt=TICKETUUID ** page=WIKIPAGE ** ** At most one of technote=, tkt= or page= are supplied. ** If none is given, all attachments are listed. If one is given, ** only attachments for the designated technote, ticket or wiki page ** are shown. TECHNOTEUUID and TICKETUUID may be just a prefix of the ** relevant technical note or ticket, in which case all attachments ** of all technical notes or tickets with the prefix will be listed. */ void attachlist_page(void){ |
︙ | ︙ | |||
84 85 86 87 88 89 90 | const char *zSrc = db_column_text(&q, 1); const char *zTarget = db_column_text(&q, 2); const char *zFilename = db_column_text(&q, 3); const char *zComment = db_column_text(&q, 4); const char *zUser = db_column_text(&q, 5); const char *zUuid = db_column_text(&q, 6); int attachid = db_column_int(&q, 7); | | | 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 | const char *zSrc = db_column_text(&q, 1); const char *zTarget = db_column_text(&q, 2); const char *zFilename = db_column_text(&q, 3); const char *zComment = db_column_text(&q, 4); const char *zUser = db_column_text(&q, 5); const char *zUuid = db_column_text(&q, 6); int attachid = db_column_int(&q, 7); /* type 0 is a wiki page, 1 is a ticket, 2 is a tech note */ int type = db_column_int(&q, 8); const char *zDispUser = zUser && zUser[0] ? zUser : "anonymous"; int i; char *zUrlTail; for(i=0; zFilename[i]; i++){ if( zFilename[i]=='/' && zFilename[i+1]!=0 ){ zFilename = &zFilename[i+1]; |
︙ | ︙ | |||
107 108 109 110 111 112 113 | zUrlTail = mprintf("page=%t&file=%t", zTarget, zFilename); } @ <li><p> @ Attachment %z(href("%R/ainfo/%!S",zUuid))%S(zUuid)</a> if( moderation_pending(attachid) ){ @ <span class="modpending">*** Awaiting Moderator Approval ***</span> } | | | 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 | zUrlTail = mprintf("page=%t&file=%t", zTarget, zFilename); } @ <li><p> @ Attachment %z(href("%R/ainfo/%!S",zUuid))%S(zUuid)</a> if( moderation_pending(attachid) ){ @ <span class="modpending">*** Awaiting Moderator Approval ***</span> } @ <br /><a href="%R/attachview?%s(zUrlTail)">%h(zFilename)</a> @ [<a href="%R/attachdownload/%t(zFilename)?%s(zUrlTail)">download</a>]<br /> if( zComment ) while( fossil_isspace(zComment[0]) ) zComment++; if( zComment && zComment[0] ){ @ %!W(zComment)<br /> } if( zPage==0 && zTkt==0 && zTechNote==0 ){ if( zSrc==0 || zSrc[0]==0 ){ |
︙ | ︙ | |||
356 357 358 359 360 361 362 | zTechNote = db_text(0, "SELECT substr(tagname,7) FROM tag" " WHERE tagname GLOB 'event-%q*'", zTechNote); if( zTechNote==0) fossil_redirect_home(); } zTarget = zTechNote; zTargetType = mprintf("Tech Note <a href=\"%R/technote/%s\">%S</a>", zTechNote, zTechNote); | | | 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 | zTechNote = db_text(0, "SELECT substr(tagname,7) FROM tag" " WHERE tagname GLOB 'event-%q*'", zTechNote); if( zTechNote==0) fossil_redirect_home(); } zTarget = zTechNote; zTargetType = mprintf("Tech Note <a href=\"%R/technote/%s\">%S</a>", zTechNote, zTechNote); }else{ if( g.perm.ApndTkt==0 || g.perm.Attach==0 ){ login_needed(g.anon.ApndTkt && g.anon.Attach); return; } if( !db_exists("SELECT 1 FROM tag WHERE tagname='tkt-%q'", zTkt) ){ zTkt = db_text(0, "SELECT substr(tagname,5) FROM tag" |
︙ | ︙ | |||
450 451 452 453 454 455 456 | if( rid==0 ){ fossil_redirect_home(); } zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); #if 0 /* Shunning here needs to get both the attachment control artifact and ** the object that is attached. */ if( g.perm.Admin ){ if( db_exists("SELECT 1 FROM shun WHERE uuid='%q'", zUuid) ){ | | | | | | | | 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 | if( rid==0 ){ fossil_redirect_home(); } zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); #if 0 /* Shunning here needs to get both the attachment control artifact and ** the object that is attached. */ if( g.perm.Admin ){ if( db_exists("SELECT 1 FROM shun WHERE uuid='%q'", zUuid) ){ style_submenu_element("Unshun", "%s/shun?uuid=%s&sub=1", g.zTop, zUuid); }else{ style_submenu_element("Shun", "%s/shun?shun=%s#addshun", g.zTop, zUuid); } } #endif pAttach = manifest_get(rid, CFTYPE_ATTACHMENT, 0); if( pAttach==0 ) fossil_redirect_home(); zTarget = pAttach->zAttachTarget; zSrc = pAttach->zAttachSrc; ridSrc = db_int(0,"SELECT rid FROM blob WHERE uuid='%q'", zSrc); zName = pAttach->zAttachName; zDesc = pAttach->zComment; zMime = mimetype_from_name(zName); fShowContent = zMime ? strncmp(zMime,"text/", 5)==0 : 0; if( validate16(zTarget, strlen(zTarget)) && db_exists("SELECT 1 FROM ticket WHERE tkt_uuid='%q'", zTarget) ){ zTktUuid = zTarget; if( !g.perm.RdTkt ){ login_needed(g.anon.RdTkt); return; } if( g.perm.WrTkt ){ style_submenu_element("Delete", "%R/ainfo/%s?del", zUuid); } }else if( db_exists("SELECT 1 FROM tag WHERE tagname='wiki-%q'",zTarget) ){ zWikiName = zTarget; if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } if( g.perm.WrWiki ){ style_submenu_element("Delete", "%R/ainfo/%s?del", zUuid); } }else if( db_exists("SELECT 1 FROM tag WHERE tagname='event-%q'",zTarget) ){ zTNUuid = zTarget; if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } if( g.perm.Write && g.perm.WrWiki ){ style_submenu_element("Delete", "%R/ainfo/%s?del", zUuid); } } zDate = db_text(0, "SELECT datetime(%.12f)", pAttach->rDate); if( P("confirm") && ((zTktUuid && g.perm.WrTkt) || (zWikiName && g.perm.WrWiki) || (zTNUuid && g.perm.Write && g.perm.WrWiki)) ){ int i, n, rid; char *zDate; Blob manifest; Blob cksum; |
︙ | ︙ | |||
549 550 551 552 553 554 555 | return; } if( strcmp(zModAction,"approve")==0 ){ moderation_approve(rid); } } style_header("Attachment Details"); | | | < | 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 | return; } if( strcmp(zModAction,"approve")==0 ){ moderation_approve(rid); } } style_header("Attachment Details"); style_submenu_element("Raw", "%R/artifact/%s", zUuid); if(fShowContent){ style_submenu_element("Line Numbers", "%R/ainfo/%s%s", zUuid, ((zLn&&*zLn) ? "" : "?ln=0")); } @ <div class="section">Overview</div> @ <p><table class="label-value"> @ <tr><th>Artifact ID:</th> @ <td>%z(href("%R/artifact/%!S",zUuid))%s(zUuid)</a> |
︙ | ︙ | |||
625 626 627 628 629 630 631 | }else{ @ <pre> @ %h(z) @ </pre> } }else if( strncmp(zMime, "image/", 6)==0 ){ int sz = db_int(0, "SELECT size FROM blob WHERE rid=%d", ridSrc); | | | | 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 | }else{ @ <pre> @ %h(z) @ </pre> } }else if( strncmp(zMime, "image/", 6)==0 ){ int sz = db_int(0, "SELECT size FROM blob WHERE rid=%d", ridSrc); @ <i>(file is %d(sz) bytes of image data)</i><br /> @ <img src="%R/raw/%s(zSrc)?m=%s(zMime)"></img> style_submenu_element("Image", "%R/raw/%s?m=%s", zSrc, zMime); }else{ int sz = db_int(0, "SELECT size FROM blob WHERE rid=%d", ridSrc); @ <i>(file is %d(sz) bytes of binary data)</i> } @ </blockquote> manifest_destroy(pAttach); blob_reset(&attach); |
︙ | ︙ | |||
697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 | ** is to be made. The attachment will be ** to the most recently modified tech note ** with the specified timestamp. ** -t|--technote TECHNOTE-ID Specifies the technote to be ** updated by its technote id. ** ** One of PAGENAME, DATETIME or TECHNOTE-ID must be specified. */ void attachment_cmd(void){ int n; db_find_and_open_repository(0, 0); if( g.argc<3 ){ goto attachment_cmd_usage; } n = strlen(g.argv[2]); if( n==0 ){ goto attachment_cmd_usage; } if( strncmp(g.argv[2],"add",n)==0 ){ | > > > > > > | | | 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 | ** is to be made. The attachment will be ** to the most recently modified tech note ** with the specified timestamp. ** -t|--technote TECHNOTE-ID Specifies the technote to be ** updated by its technote id. ** ** One of PAGENAME, DATETIME or TECHNOTE-ID must be specified. ** ** DATETIME may be "now" or "YYYY-MM-DDTHH:MM:SS.SSS". If in ** year-month-day form, it may be truncated, the "T" may be replaced by ** a space, and it may also name a timezone offset from UTC as "-HH:MM" ** (westward) or "+HH:MM" (eastward). Either no timezone suffix or "Z" ** means UTC. */ void attachment_cmd(void){ int n; db_find_and_open_repository(0, 0); if( g.argc<3 ){ goto attachment_cmd_usage; } n = strlen(g.argv[2]); if( n==0 ){ goto attachment_cmd_usage; } if( strncmp(g.argv[2],"add",n)==0 ){ const char *zPageName = 0; /* Name of the wiki page to attach to */ const char *zFile; /* Name of the file to be attached */ const char *zETime; /* The name of the technote to attach to */ Manifest *pWiki = 0; /* Parsed wiki page content */ char *zBody = 0; /* Wiki page content */ int rid; const char *zTarget; /* Target of the attachment */ Blob content; /* The content of the attachment */ zETime = find_option("technote","t",1); if( !zETime ){ if( g.argc!=5 ){ usage("add PAGENAME FILENAME"); } zPageName = g.argv[3]; rid = db_int(0, "SELECT x.rid FROM tag t, tagxref x" " WHERE x.tagid=t.tagid AND t.tagname='wiki-%q'" " ORDER BY x.mtime DESC LIMIT 1", zPageName ); if( (pWiki = manifest_get(rid, CFTYPE_WIKI, 0))!=0 ){ zBody = pWiki->zWiki; } if( zBody==0 ){ fossil_fatal("wiki page [%s] not found",zPageName); } zTarget = zPageName; |
︙ | ︙ |
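The expanded attachment help above pins down the DATETIME syntax accepted by -t|--technote. Purely as an illustration of that syntax (the file name and timestamp are invented, and the exact argument layout of the --technote form is not shown in this hunk), an invocation would look something like:

    fossil attachment add meeting-notes.txt -t "2016-11-07 00:50"

i.e. a truncated year-month-day form with the "T" replaced by a space, interpreted as UTC because no timezone suffix is given.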
Changes to src/bisect.c.
︙ | ︙ | |||
73 74 75 76 77 78 79 | /* ** Return the value of a boolean bisect option. */ int bisect_option(const char *zName){ unsigned int i; int r = -1; | | | 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 | /* ** Return the value of a boolean bisect option. */ int bisect_option(const char *zName){ unsigned int i; int r = -1; for(i=0; i<count(aBisectOption); i++){ if( fossil_strcmp(zName, aBisectOption[i].zName)==0 ){ char *zLabel = mprintf("bisect-%s", zName); char *z = db_lget(zLabel, (char*)aBisectOption[i].zDefault); if( is_truth(z) ) r = 1; if( is_false(z) ) r = 0; if( r<0 ) r = is_truth(aBisectOption[i].zDefault); free(zLabel); |
︙ | ︙ | |||
404 405 406 407 408 409 410 | }else if( strncmp(zCmd, "log", n)==0 ){ bisect_chart(0); }else if( strncmp(zCmd, "chart", n)==0 ){ bisect_chart(1); }else if( strncmp(zCmd, "options", n)==0 ){ if( g.argc==3 ){ unsigned int i; | | | | | 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 | }else if( strncmp(zCmd, "log", n)==0 ){ bisect_chart(0); }else if( strncmp(zCmd, "chart", n)==0 ){ bisect_chart(1); }else if( strncmp(zCmd, "options", n)==0 ){ if( g.argc==3 ){ unsigned int i; for(i=0; i<count(aBisectOption); i++){ char *z = mprintf("bisect-%s", aBisectOption[i].zName); fossil_print(" %-15s %-6s ", aBisectOption[i].zName, db_lget(z, (char*)aBisectOption[i].zDefault)); fossil_free(z); comment_print(aBisectOption[i].zDesc, 0, 27, -1, g.comFmtFlags); } }else if( g.argc==4 || g.argc==5 ){ unsigned int i; n = strlen(g.argv[3]); for(i=0; i<count(aBisectOption); i++){ if( strncmp(g.argv[3], aBisectOption[i].zName, n)==0 ){ char *z = mprintf("bisect-%s", aBisectOption[i].zName); if( g.argc==5 ){ db_lset(z, g.argv[4]); } fossil_print("%s\n", db_lget(z, (char*)aBisectOption[i].zDefault)); fossil_free(z); break; } } if( i>=count(aBisectOption) ){ fossil_fatal("no such bisect option: %s", g.argv[3]); } }else{ usage("options ?NAME? ?VALUE?"); } }else if( strncmp(zCmd, "reset", n)==0 ){ db_multi_exec( |
︙ | ︙ |
Changes to src/blob.c.
︙ | ︙ | |||
651 652 653 654 655 656 657 | /* ** Return true if the blob contains a valid UUID_SIZE-digit base16 identifier. */ int blob_is_uuid(Blob *pBlob){ return blob_size(pBlob)==UUID_SIZE && validate16(blob_buffer(pBlob), UUID_SIZE); } | | > > > | > > > > > > > > > > > > > > > > > > > > > > | 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 | /* ** Return true if the blob contains a valid UUID_SIZE-digit base16 identifier. */ int blob_is_uuid(Blob *pBlob){ return blob_size(pBlob)==UUID_SIZE && validate16(blob_buffer(pBlob), UUID_SIZE); } /* ** Return true if the blob contains a valid filename */ int blob_is_filename(Blob *pBlob){ return file_is_simple_pathname(blob_str(pBlob), 1); } /* ** Return true if the blob contains a valid 32-bit integer. Store ** the integer value in *pValue. */ int blob_is_int(Blob *pBlob, int *pValue){ const char *z = blob_buffer(pBlob); int i, n, c, v; n = blob_size(pBlob); v = 0; for(i=0; i<n && (c = z[i])!=0 && c>='0' && c<='9'; i++){ v = v*10 + c - '0'; } if( i==n ){ *pValue = v; return 1; }else{ return 0; } } /* ** Return true if the blob contains a valid 64-bit integer. Store ** the integer value in *pValue. */ int blob_is_int64(Blob *pBlob, sqlite3_int64 *pValue){ const char *z = blob_buffer(pBlob); int i, n, c; sqlite3_int64 v; n = blob_size(pBlob); v = 0; for(i=0; i<n && (c = z[i])!=0 && c>='0' && c<='9'; i++){ v = v*10 + c - '0'; } if( i==n ){ *pValue = v; return 1; |
︙ | ︙ | |||
824 825 826 827 828 829 830 | ** Return the number of bytes written. */ int blob_write_to_file(Blob *pBlob, const char *zFilename){ FILE *out; int nWrote; if( zFilename[0]==0 || (zFilename[0]=='-' && zFilename[1]==0) ){ | | | | < | > > > > > > > | 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 | ** Return the number of bytes written. */ int blob_write_to_file(Blob *pBlob, const char *zFilename){ FILE *out; int nWrote; if( zFilename[0]==0 || (zFilename[0]=='-' && zFilename[1]==0) ){ blob_is_init(pBlob); #if defined(_WIN32) nWrote = fossil_utf8_to_console(blob_buffer(pBlob), blob_size(pBlob), 0); if( nWrote>=0 ) return nWrote; fflush(stdout); _setmode(_fileno(stdout), _O_BINARY); #endif nWrote = fwrite(blob_buffer(pBlob), 1, blob_size(pBlob), stdout); #if defined(_WIN32) fflush(stdout); _setmode(_fileno(stdout), _O_TEXT); #endif }else{ file_mkfolder(zFilename, 1, 0); out = fossil_fopen(zFilename, "wb"); if( out==0 ){ #if _WIN32 const char *zReserved = file_is_win_reserved(zFilename); if( zReserved ){ fossil_fatal("cannot open \"%s\" because \"%s\" is " "a reserved name on Windows", zFilename, zReserved); } #endif fossil_fatal_recursive("unable to open file \"%s\" for writing", zFilename); return 0; } blob_is_init(pBlob); nWrote = fwrite(blob_buffer(pBlob), 1, blob_size(pBlob), out); fclose(out); |
︙ | ︙ | |||
1001 1002 1003 1004 1005 1006 1007 | /* ** COMMAND: test-uncompress ** ** Usage: %fossil test-uncompress IN OUT ** ** Read the content of file IN, uncompress that content, and write the | | | 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 | /* ** COMMAND: test-uncompress ** ** Usage: %fossil test-uncompress IN OUT ** ** Read the content of file IN, uncompress that content, and write the ** result into OUT. This command is intended for testing of the ** blob_compress() function. */ void uncompress_cmd(void){ Blob f; if( g.argc!=4 ) usage("INPUTFILE OUTPUTFILE"); blob_read_from_file(&f, g.argv[2]); blob_uncompress(&f, &f); |
︙ | ︙ |
Changes to src/branch.c.
︙ | ︙ | |||
152 153 154 155 156 157 158 | brid = content_put_ex(&branch, 0, 0, 0, isPrivate); if( brid==0 ){ fossil_fatal("trouble committing manifest: %s", g.zErrMsg); } db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", brid); if( manifest_crosslink(brid, &branch, MC_PERMIT_HOOKS)==0 ){ | | | 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 | brid = content_put_ex(&branch, 0, 0, 0, isPrivate); if( brid==0 ){ fossil_fatal("trouble committing manifest: %s", g.zErrMsg); } db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", brid); if( manifest_crosslink(brid, &branch, MC_PERMIT_HOOKS)==0 ){ fossil_fatal("%s", g.zErrMsg); } assert( blob_is_reset(&branch) ); content_deltify(rootid, brid, 0); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", brid); fossil_print("New branch: %s\n", zUuid); if( g.argc==3 ){ fossil_print( |
︙ | ︙ | |||
174 175 176 177 178 179 180 | } /* Commit */ db_end_transaction(0); /* Do an autosync push, if requested */ | | | | 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 | } /* Commit */ db_end_transaction(0); /* Do an autosync push, if requested */ if( !isPrivate ) autosync_loop(SYNC_PUSH, db_get_int("autosync-tries",1),0); } #if INTERFACE /* ** Allows bits in the mBplqFlags parameter to branch_prepare_list_query(). */ #define BRL_CLOSED_ONLY 0x001 /* Show only closed branches */ #define BRL_OPEN_ONLY 0x002 /* Show only open branches */ #define BRL_BOTH 0x003 /* Show both open and closed branches */ #define BRL_OPEN_CLOSED_MASK 0x003 #define BRL_MTIME 0x004 /* Include lastest check-in time */ #define BRL_ORDERBY_MTIME 0x008 /* Sort by MTIME. (otherwise sort by name)*/ #endif /* INTERFACE */ /* ** Prepare a query that will list branches. ** ** If (which<0) then the query pulls only closed branches. If |
︙ | ︙ | |||
256 257 258 259 260 261 262 263 264 265 266 267 268 269 | ** Supported options for this subcommand include: ** --private branch is private (i.e., remains local) ** --bgcolor COLOR use COLOR instead of automatic background ** --nosign do not sign contents on this branch ** --date-override DATE DATE to use instead of 'now' ** --user-override USER USER to use instead of the current default ** ** %fossil branch list ?-a|--all|-c|--closed? ** %fossil branch ls ?-a|--all|-c|--closed? ** ** List all branches. Use -a or --all to list all branches and ** -c or --closed to list all closed branches. The default is to ** show only open branches. ** | > > > > > > | 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 | ** Supported options for this subcommand include: ** --private branch is private (i.e., remains local) ** --bgcolor COLOR use COLOR instead of automatic background ** --nosign do not sign contents on this branch ** --date-override DATE DATE to use instead of 'now' ** --user-override USER USER to use instead of the current default ** ** DATE may be "now" or "YYYY-MM-DDTHH:MM:SS.SSS". If in ** year-month-day form, it may be truncated, the "T" may be ** replaced by a space, and it may also name a timezone offset ** from UTC as "-HH:MM" (westward) or "+HH:MM" (eastward). ** Either no timezone suffix or "Z" means UTC. ** ** %fossil branch list ?-a|--all|-c|--closed? ** %fossil branch ls ?-a|--all|-c|--closed? ** ** List all branches. Use -a or --all to list all branches and ** -c or --closed to list all closed branches. The default is to ** show only open branches. ** |
︙ | ︙ | |||
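The new DATE paragraph above compresses several accepted spellings into prose. As concrete illustrations of what it describes (the date values themselves are invented for the example), all of the following forms should be acceptable to --date-override: a truncated year-month-day value, a date-time with either "T" or a space as separator, fractional seconds, and an explicit offset or "Z" suffix:

    --date-override 2016-11-07
    --date-override "2016-11-07 00:50:10"
    --date-override 2016-11-07T00:50:10.445
    --date-override 2016-11-07T00:50:10Z
    --date-override "2016-11-07 00:50:10-05:00"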
315 316 317 318 319 320 321 | @ (SELECT tagxref.value @ FROM plink CROSS JOIN tagxref @ WHERE plink.pid=event.objid @ AND tagxref.rid=plink.cid @ AND tagxref.tagid=(SELECT tagid FROM tag WHERE tagname='branch') @ AND tagtype>0), @ count(*), | | > > > > > > > > > > > > > > | > | 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 | @ (SELECT tagxref.value @ FROM plink CROSS JOIN tagxref @ WHERE plink.pid=event.objid @ AND tagxref.rid=plink.cid @ AND tagxref.tagid=(SELECT tagid FROM tag WHERE tagname='branch') @ AND tagtype>0), @ count(*), @ (SELECT uuid FROM blob WHERE rid=tagxref.rid), @ event.bgcolor @ FROM tagxref, tag, event @ WHERE tagxref.tagid=tag.tagid @ AND tagxref.tagtype>0 @ AND tag.tagname='branch' @ AND event.objid=tagxref.rid @ GROUP BY 1 @ ORDER BY 2 DESC; ; /* ** This is the new-style branch-list page that shows the branch names ** together with their ages (time of last check-in) and whether or not ** they are closed or merged to another branch. ** ** Control jumps to this routine from brlist_page() (the /brlist handler) ** if there are no query parameters. */ static void new_brlist_page(void){ Stmt q; double rNow; int show_colors = PB("colors"); login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Branches"); style_adunit_config(ADUNIT_RIGHT_OK); style_submenu_checkbox("colors", "Use Branch Colors", 0); login_anonymous_available(); db_prepare(&q, brlistQuery/*works-like:""*/); rNow = db_double(0.0, "SELECT julianday('now')"); @ <div class="brlist"><table id="branchlisttable"> @ <thead><tr> @ <th>Branch Name</th> @ <th>Age</th> @ <th>Check-ins</th> @ <th>Status</th> @ <th>Resolution</th> @ </tr></thead><tbody> while( db_step(&q)==SQLITE_ROW ){ const char *zBranch = db_column_text(&q, 0); double rMtime = db_column_double(&q, 1); int isClosed = db_column_int(&q, 2); const char *zMergeTo = db_column_text(&q, 3); int nCkin = db_column_int(&q, 4); const char *zLastCkin = db_column_text(&q, 5); const char *zBgClr = db_column_text(&q, 6); char *zAge = human_readable_age(rNow - rMtime); sqlite3_int64 iMtime = (sqlite3_int64)(rMtime*86400.0); if( zMergeTo && zMergeTo[0]==0 ) zMergeTo = 0; if( zBgClr == 0 ){ if( zBranch==0 || strcmp(zBranch,"trunk")==0 ){ zBgClr = 0; }else{ zBgClr = hash_color(zBranch); } } if( zBgClr && zBgClr[0] && show_colors ){ @ <tr style="background-color:%s(zBgClr)"> }else{ @ <tr> } @ <td>%z(href("%R/timeline?n=100&r=%T",zBranch))%h(zBranch)</a></td> @ <td data-sortkey="%016llx(-iMtime)">%s(zAge)</td> @ <td>%d(nCkin)</td> fossil_free(zAge); @ <td>%s(isClosed?"closed":"")</td> if( zMergeTo ){ @ <td>merged into |
︙ | ︙ | |||
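One detail of the new branch-list table worth noting: the Age column carries a data-sortkey built from the negated last-change time, converted from fractional Julian days to seconds (rMtime*86400.0) and printed as zero-padded 16-digit hex, so a plain ascending text sort of that attribute puts the most recently changed branch first. A tiny self-contained illustration of why the negation reverses the order (the timestamp values are arbitrary):

#include <stdio.h>

int main(void){
  long long older = 1000, newer = 2000;       /* arbitrary timestamps */
  /* Negated and printed as fixed-width hex, the newer value sorts
  ** lexically before the older one, i.e. newest-first. */
  printf("%016llx\n", (unsigned long long)(-older));  /* fffffffffffffc18 */
  printf("%016llx\n", (unsigned long long)(-newer));  /* fffffffffffff830 */
  return 0;
}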
419 420 421 422 423 424 425 | showAll = 1; } if( showAll ) brFlags = BRL_BOTH; if( showClosed ) brFlags = BRL_CLOSED_ONLY; style_header("%s", showClosed ? "Closed Branches" : showAll ? "All Branches" : "Open Branches"); | | | | | | | | | | | 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 | showAll = 1; } if( showAll ) brFlags = BRL_BOTH; if( showClosed ) brFlags = BRL_CLOSED_ONLY; style_header("%s", showClosed ? "Closed Branches" : showAll ? "All Branches" : "Open Branches"); style_submenu_element("Timeline", "brtimeline"); if( showClosed ){ style_submenu_element("All", "brlist?all"); style_submenu_element("Open", "brlist?open"); }else if( showAll ){ style_submenu_element("Closed", "brlist?closed"); style_submenu_element("Open", "brlist"); }else{ style_submenu_element("All", "brlist?all"); style_submenu_element("Closed", "brlist?closed"); } if( !colorTest ){ style_submenu_element("Color-Test", "brlist?colortest"); }else{ style_submenu_element("All", "brlist?all"); } login_anonymous_available(); #if 0 style_sidebox_begin("Nomenclature:", "33%"); @ <ol> @ <li> An <div class="sideboxDescribed">%z(href("brlist")) @ open branch</a></div> is a branch that has one or more |
︙ | ︙ | |||
521 522 523 524 525 526 527 | void brtimeline_page(void){ Stmt q; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Branches"); | | | 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 | void brtimeline_page(void){ Stmt q; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Branches"); style_submenu_element("List", "brlist"); login_anonymous_available(); @ <h2>The initial check-in for each branch:</h2> db_prepare(&q, "%s AND blob.rid IN (SELECT rid FROM tagxref" " WHERE tagtype>0 AND tagid=%d AND srcid!=0)" " ORDER BY event.mtime DESC", timeline_query_for_www(), TAG_BRANCH ); www_print_timeline(&q, 0, 0, 0, 0, brtimeline_extra); db_finalize(&q); style_footer(); } |
Changes to src/browse.c.
︙ | ︙ | |||
169 170 171 172 173 174 175 | /* Compute the title of the page */ blob_zero(&dirname); if( zD ){ blob_append(&dirname, "in directory ", -1); hyperlinked_path(zD, &dirname, zCI, "dir", ""); zPrefix = mprintf("%s/", zD); | | | | < | < | < | | 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 | /* Compute the title of the page */ blob_zero(&dirname); if( zD ){ blob_append(&dirname, "in directory ", -1); hyperlinked_path(zD, &dirname, zCI, "dir", ""); zPrefix = mprintf("%s/", zD); style_submenu_element("Top-Level", "%s", url_render(&sURI, "name", 0, 0, 0)); }else{ blob_append(&dirname, "in the top-level directory", -1); zPrefix = ""; } if( linkTrunk ){ style_submenu_element("Trunk", "%s", url_render(&sURI, "ci", "trunk", 0, 0)); } if( linkTip ){ style_submenu_element("Tip", "%s", url_render(&sURI, "ci", "tip", 0, 0)); } if( zCI ){ @ <h2>Files of check-in [%z(href("vinfo?name=%!S",zUuid))%S(zUuid)</a>] @ %s(blob_str(&dirname))</h2> zSubdirLink = mprintf("%R/dir?ci=%!S&name=%T", zUuid, zPrefix); if( nD==0 ){ style_submenu_element("File Ages", "%R/fileage?name=%!S", zUuid); } }else{ @ <h2>The union of all files from all check-ins @ %s(blob_str(&dirname))</h2> zSubdirLink = mprintf("%R/dir?name=%T", zPrefix); } style_submenu_element("All", "%s", url_render(&sURI, "ci", 0, 0, 0)); style_submenu_element("Tree-View", "%s", url_render(&sURI, "type", "tree", 0, 0)); /* Compute the temporary table "localfiles" containing the names ** of all files and subdirectories in the zD[] directory. ** ** Subdirectory names begin with "/". This causes them to sort ** first and it also gives us an easy way to distinguish files |
︙ | ︙ | |||
614 615 616 617 618 619 620 | /* Compute the title of the page */ blob_zero(&dirname); if( zD ){ blob_append(&dirname, "within directory ", -1); hyperlinked_path(zD, &dirname, zCI, "tree", zREx); if( zRE ) blob_appendf(&dirname, " matching \"%s\"", zRE); | | | < | < | | < | | 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 | /* Compute the title of the page */ blob_zero(&dirname); if( zD ){ blob_append(&dirname, "within directory ", -1); hyperlinked_path(zD, &dirname, zCI, "tree", zREx); if( zRE ) blob_appendf(&dirname, " matching \"%s\"", zRE); style_submenu_element("Top-Level", "%s", url_render(&sURI, "name", 0, 0, 0)); }else{ if( zRE ){ blob_appendf(&dirname, "matching \"%s\"", zRE); } } style_submenu_binary("mtime","Sort By Time","Sort By Filename", 0); if( zCI ){ style_submenu_element("All", "%s", url_render(&sURI, "ci", 0, 0, 0)); if( nD==0 && !showDirOnly ){ style_submenu_element("File Ages", "%R/fileage?name=%s", zUuid); } } if( linkTrunk ){ style_submenu_element("Trunk", "%s", url_render(&sURI, "ci", "trunk", 0, 0)); } if( linkTip ){ style_submenu_element("Tip", "%s", url_render(&sURI, "ci", "tip", 0, 0)); } style_submenu_element("Flat-View", "%s", url_render(&sURI, "type", "flat", 0, 0)); /* Compute the file hierarchy. */ if( zCI ){ Stmt q; compute_fileage(rid, 0); |
︙ | ︙ | |||
693 694 695 696 697 698 699 | } if( showDirOnly ){ for(nFile=0, p=sTree.pFirst; p; p=p->pNext){ if( p->pChild!=0 && p->nFullName>nD ) nFile++; } zObjType = "Folders"; | < < < < > > | 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 | } if( showDirOnly ){ for(nFile=0, p=sTree.pFirst; p; p=p->pNext){ if( p->pChild!=0 && p->nFullName>nD ) nFile++; } zObjType = "Folders"; }else{ zObjType = "Files"; } style_submenu_checkbox("nofiles", "Folders Only", 0); if( zCI ){ @ <h2>%s(zObjType) from if( sqlite3_strnicmp(zCI, zUuid, (int)strlen(zCI))!=0 ){ @ "%h(zCI)" } @ [%z(href("vinfo?name=%!S",zUuid))%S(zUuid)</a>] %s(blob_str(&dirname)) |
︙ | ︙ | |||
1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 | const char *zUuid; const char *zNow; /* Time of check-in */ int showId = PB("showid"); Stmt q1, q2; double baseTime; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } zName = P("name"); if( zName==0 ) zName = "tip"; rid = symbolic_name_to_rid(zName, "ci"); if( rid==0 ){ fossil_fatal("not a valid check-in: %s", zName); } zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); baseTime = db_double(0.0,"SELECT mtime FROM event WHERE objid=%d", rid); zNow = db_text("", "SELECT datetime(mtime,toLocal()) FROM event" " WHERE objid=%d", rid); | > | < < | 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 | const char *zUuid; const char *zNow; /* Time of check-in */ int showId = PB("showid"); Stmt q1, q2; double baseTime; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } if( exclude_spiders() ) return; zName = P("name"); if( zName==0 ) zName = "tip"; rid = symbolic_name_to_rid(zName, "ci"); if( rid==0 ){ fossil_fatal("not a valid check-in: %s", zName); } zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); baseTime = db_double(0.0,"SELECT mtime FROM event WHERE objid=%d", rid); zNow = db_text("", "SELECT datetime(mtime,toLocal()) FROM event" " WHERE objid=%d", rid); style_submenu_element("Tree-View", "%R/tree?ci=%T&mtime=1&type=tree", zName); style_header("File Ages"); zGlob = P("glob"); compute_fileage(rid,zGlob); db_multi_exec("CREATE INDEX fileage_ix1 ON fileage(mid,pathname);"); @ <h2>Files in @ %z(href("%R/info/%!S",zUuid))[%S(zUuid)]</a> |
︙ | ︙ | |||
1085 1086 1087 1088 1089 1090 1091 | @ <td> db_bind_int(&q2, ":mid", mid); while( db_step(&q2)==SQLITE_ROW ){ const char *zFUuid = db_column_text(&q2,0); const char *zFile = db_column_text(&q2,1); int fid = db_column_int(&q2,2); if( showId ){ | | | | 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 | @ <td> db_bind_int(&q2, ":mid", mid); while( db_step(&q2)==SQLITE_ROW ){ const char *zFUuid = db_column_text(&q2,0); const char *zFile = db_column_text(&q2,1); int fid = db_column_int(&q2,2); if( showId ){ @ %z(href("%R/artifact/%!S",zFUuid))%h(zFile)</a> (%d(fid))<br /> }else{ @ %z(href("%R/artifact/%!S",zFUuid))%h(zFile)</a><br /> } } db_reset(&q2); @ </td> @ <td> @ %z(href("%R/info/%!S",zUuid))[%S(zUuid)]</a> if( showId ){ |
︙ | ︙ |
Changes to src/builtin.c.
︙ | ︙ | |||
31 32 33 34 35 36 37 | /* ** Return a pointer to built-in content */ const unsigned char *builtin_file(const char *zFilename, int *piSize){ int lwr, upr, i, c; lwr = 0; | | | 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 | /* ** Return a pointer to built-in content */ const unsigned char *builtin_file(const char *zFilename, int *piSize){ int lwr, upr, i, c; lwr = 0; upr = count(aBuiltinFiles) - 1; while( upr>=lwr ){ i = (upr+lwr)/2; c = strcmp(aBuiltinFiles[i].zName,zFilename); if( c<0 ){ lwr = i+1; }else if( c>0 ){ upr = i-1; |
︙ | ︙ | |||
58 59 60 61 62 63 64 | /* ** COMMAND: test-builtin-list ** ** List the names and sizes of all built-in resources. */ void test_builtin_list(void){ int i; | | | 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 | /* ** COMMAND: test-builtin-list ** ** List the names and sizes of all built-in resources. */ void test_builtin_list(void){ int i; for(i=0; i<count(aBuiltinFiles); i++){ fossil_print("%-30s %6d\n", aBuiltinFiles[i].zName,aBuiltinFiles[i].nByte); } } /* ** COMMAND: test-builtin-get ** |
︙ | ︙ |
Changes to src/bundle.c.
︙ | ︙ | |||
24 25 26 27 28 29 30 | /* ** SQL code used to initialize the schema of a bundle. ** ** The bblob.delta field can be an integer, a text string, or NULL. ** If an integer, then the corresponding blobid is the delta basis. ** If a text string, then that string is a SHA1 hash for the delta ** basis, which is presumably in the master repository. If NULL, then | | | 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 | /* ** SQL code used to initialize the schema of a bundle. ** ** The bblob.delta field can be an integer, a text string, or NULL. ** If an integer, then the corresponding blobid is the delta basis. ** If a text string, then that string is a SHA1 hash for the delta ** basis, which is presumably in the master repository. If NULL, then ** data contains content without delta compression. */ static const char zBundleInit[] = @ CREATE TABLE IF NOT EXISTS "%w".bconfig( @ bcname TEXT, @ bcvalue ANY @ ); @ CREATE TABLE IF NOT EXISTS "%w".bblob( |
︙ | ︙ | |||
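The expanded comment above defines a small three-way protocol for bblob.delta. A reader-side sketch of how those cases could be told apart when walking a bundle; this is illustrative code, not part of Fossil, and both pStmt and iDeltaCol (the index of the delta column in whatever query is used) are assumptions of the example:

switch( sqlite3_column_type(pStmt, iDeltaCol) ){
  case SQLITE_INTEGER:
    /* delta basis is another bblob row, identified by blobid */
    break;
  case SQLITE_TEXT:
    /* delta basis lives in the master repository; the text is its SHA1 */
    break;
  case SQLITE_NULL:
  default:
    /* data holds full content with no delta compression applied */
    break;
}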
311 312 313 314 315 316 317 | db_multi_exec( "INSERT INTO bconfig(bcname,bcvalue)" " VALUES('mtime',datetime('now'));" ); db_multi_exec( "INSERT INTO bconfig(bcname,bcvalue)" " SELECT name, value FROM config" | | | 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 | db_multi_exec( "INSERT INTO bconfig(bcname,bcvalue)" " VALUES('mtime',datetime('now'));" ); db_multi_exec( "INSERT INTO bconfig(bcname,bcvalue)" " SELECT name, value FROM config" " WHERE name IN ('project-code','parent-project-code');" ); /* Directly copy content from the repository into the bundle as long ** as the repository content is a delta from some other artifact that ** is also in the bundle. */ db_multi_exec( |
︙ | ︙ | |||
368 369 370 371 372 373 374 | deltaFrom = db_int(0, "SELECT max(fid) FROM mlink" " WHERE fnid=(SELECT fnid FROM mlink WHERE fid=%d)" " AND fid<%d", rid, mnToBundle); } } | | | 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 | deltaFrom = db_int(0, "SELECT max(fid) FROM mlink" " WHERE fnid=(SELECT fnid FROM mlink WHERE fid=%d)" " AND fid<%d", rid, mnToBundle); } } /* Try to insert the artifact as a delta */ if( deltaFrom ){ Blob basis, delta; content_get(deltaFrom, &basis); blob_delta_create(&basis, &content, &delta); if( blob_size(&delta)>0.9*blob_size(&content) ){ deltaFrom = 0; |
︙ | ︙ | |||
759 760 761 762 763 764 765 | ** any check-ins that are descendants of check-ins already in the bundle, ** and any tags that apply to artifacts in the bundle. ** ** fossil bundle import BUNDLE ?--publish? ** ** Import all content from BUNDLE into the repository. By default, the ** imported files are private and will not sync. Use the --publish | | | 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 | ** any check-ins that are descendants of check-ins already in the bundle, ** and any tags that apply to artifacts in the bundle. ** ** fossil bundle import BUNDLE ?--publish? ** ** Import all content from BUNDLE into the repository. By default, the ** imported files are private and will not sync. Use the --publish ** option to make the import public. ** ** fossil bundle ls BUNDLE ** ** List the contents of BUNDLE on standard output ** ** fossil bundle purge BUNDLE ** |
︙ | ︙ |
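Putting the two documented forms above together, a typical inspect-then-publish sequence might look like this (the bundle file name is invented for the example):

    fossil bundle ls changes.bundle
    fossil bundle import changes.bundle --publish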
Changes to src/cache.c.
︙ | ︙ | |||
235 236 237 238 239 240 241 | */ void cache_initialize(void){ sqlite3_close(cacheOpen(1)); } /* ** COMMAND: cache* | | | 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 | */ void cache_initialize(void){ sqlite3_close(cacheOpen(1)); } /* ** COMMAND: cache* ** ** Usage: %fossil cache SUBCOMMAND ** ** Manage the cache used for potentially expensive web pages such as ** /zip and /tarball. SUBCOMMAND can be: ** ** clear Remove all entries from the cache. ** |
︙ | ︙ | |||
354 355 356 357 358 359 360 | " FROM cache" " ORDER BY tm DESC" ); if( pStmt ){ @ <ol> while( sqlite3_step(pStmt)==SQLITE_ROW ){ const unsigned char *zName = sqlite3_column_text(pStmt,0); | | | 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 | " FROM cache" " ORDER BY tm DESC" ); if( pStmt ){ @ <ol> while( sqlite3_step(pStmt)==SQLITE_ROW ){ const unsigned char *zName = sqlite3_column_text(pStmt,0); @ <li><p>%z(href("%R/cacheget?key=%T",zName))%h(zName)</a><br /> @ size: %s(sqlite3_column_text(pStmt,1)) @ hit-count: %d(sqlite3_column_int(pStmt,2)) @ last-access: %s(sqlite3_column_text(pStmt,3))</p></li> } sqlite3_finalize(pStmt); @ </ol> } |
︙ | ︙ |
Changes to src/captcha.c.
︙ | ︙ | |||
576 577 578 579 580 581 582 | /* ** Check to see if the current request is coming from an agent that might ** be a spider. If the agent is not a spider, then return 0 without doing ** anything. But if the user agent appears to be a spider, offer ** a captcha challenge to allow the user agent to prove that it is human ** and return non-zero. */ | | | 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 | /* ** Check to see if the current request is coming from an agent that might ** be a spider. If the agent is not a spider, then return 0 without doing ** anything. But if the user agent appears to be a spider, offer ** a captcha challenge to allow the user agent to prove that it is human ** and return non-zero. */ int exclude_spiders(void){ const char *zCookieValue; char *zCookieName; if( g.isHuman ) return 0; #if 0 { const char *zReferer = P("HTTP_REFERER"); if( zReferer && strncmp(g.zBaseURL, zReferer, strlen(g.zBaseURL))==0 ){ |
︙ | ︙ | |||
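The reworked exclude_spiders() is meant to be dropped in right after the permission check of an expensive page; when it serves the captcha challenge it returns non-zero and the caller simply abandons the request. That calling pattern is visible earlier in this check-in, in the /fileage handler in src/browse.c:

  login_check_credentials();
  if( !g.perm.Read ){ login_needed(g.anon.Read); return; }
  if( exclude_spiders() ) return;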
598 599 600 601 602 603 604 | if( captcha_is_correct() ){ cgi_set_cookie(zCookieName, "1", login_cookie_path(), 8*3600); return 0; } /* This appears to be a spider. Offer the captcha */ style_header("Verification"); | | | 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 | if( captcha_is_correct() ){ cgi_set_cookie(zCookieName, "1", login_cookie_path(), 8*3600); return 0; } /* This appears to be a spider. Offer the captcha */ style_header("Verification"); @ <form method='POST' action='%s(g.zPath)'> cgi_query_parameters_to_hidden(); @ <p>Please demonstrate that you are human, not a spider or robot</p> captcha_generate(1); @ </form> style_footer(); return 1; } |
Changes to src/cgi.c.
︙ | ︙ | |||
19 20 21 22 23 24 25 26 27 28 29 30 31 32 | ** services to CGI programs. There are procedures for parsing and ** dispensing QUERY_STRING parameters and cookies, the "mprintf()" ** formatting function and its cousins, and routines to encode and ** decode strings in HTML or HTTP. */ #include "config.h" #ifdef _WIN32 # include <winsock2.h> # include <ws2tcpip.h> #else # include <sys/socket.h> # include <netinet/in.h> # include <arpa/inet.h> # include <sys/times.h> | > > > | 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 | ** services to CGI programs. There are procedures for parsing and ** dispensing QUERY_STRING parameters and cookies, the "mprintf()" ** formatting function and its cousins, and routines to encode and ** decode strings in HTML or HTTP. */ #include "config.h" #ifdef _WIN32 # if !defined(_WIN32_WINNT) # define _WIN32_WINNT 0x0501 # endif # include <winsock2.h> # include <ws2tcpip.h> #else # include <sys/socket.h> # include <netinet/in.h> # include <arpa/inet.h> # include <sys/times.h> |
︙ | ︙ | |||
478 479 480 481 482 483 484 485 486 487 488 489 490 491 | ** is its fully decoded value. ** ** Copies are made of both the zName and zValue parameters. */ void cgi_set_parameter(const char *zName, const char *zValue){ cgi_set_parameter_nocopy(mprintf("%s",zName), mprintf("%s",zValue), 0); } /* ** Replace a parameter with a new value. */ void cgi_replace_parameter(const char *zName, const char *zValue){ int i; for(i=0; i<nUsedQP; i++){ | > > > | 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 | ** is its fully decoded value. ** ** Copies are made of both the zName and zValue parameters. */ void cgi_set_parameter(const char *zName, const char *zValue){ cgi_set_parameter_nocopy(mprintf("%s",zName), mprintf("%s",zValue), 0); } void cgi_set_query_parameter(const char *zName, const char *zValue){ cgi_set_parameter_nocopy(mprintf("%s",zName), mprintf("%s",zValue), 1); } /* ** Replace a parameter with a new value. */ void cgi_replace_parameter(const char *zName, const char *zValue){ int i; for(i=0; i<nUsedQP; i++){ |
︙ | ︙ | |||
503 504 505 506 507 508 509 510 511 512 513 514 515 516 | aParamQP[i].zValue = zValue; assert( aParamQP[i].isQP ); return; } } cgi_set_parameter_nocopy(zName, zValue, 1); } /* ** Add a query parameter. The zName portion is fixed but a copy ** must be made of zValue. */ void cgi_setenv(const char *zName, const char *zValue){ cgi_set_parameter_nocopy(zName, mprintf("%s",zValue), 0); | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 | aParamQP[i].zValue = zValue; assert( aParamQP[i].isQP ); return; } } cgi_set_parameter_nocopy(zName, zValue, 1); } /* ** Delete a parameter. */ void cgi_delete_parameter(const char *zName){ int i; for(i=0; i<nUsedQP; i++){ if( fossil_strcmp(aParamQP[i].zName,zName)==0 ){ --nUsedQP; if( i<nUsedQP ){ memmove(aParamQP+i, aParamQP+i+1, sizeof(*aParamQP)*(nUsedQP-i)); } return; } } } void cgi_delete_query_parameter(const char *zName){ int i; for(i=0; i<nUsedQP; i++){ if( fossil_strcmp(aParamQP[i].zName,zName)==0 ){ assert( aParamQP[i].isQP ); --nUsedQP; if( i<nUsedQP ){ memmove(aParamQP+i, aParamQP+i+1, sizeof(*aParamQP)*(nUsedQP-i)); } return; } } } /* ** Add a query parameter. The zName portion is fixed but a copy ** must be made of zValue. */ void cgi_setenv(const char *zName, const char *zValue){ cgi_set_parameter_nocopy(zName, mprintf("%s",zValue), 0); |
︙ | ︙ | |||
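Both new deletion helpers above remove an entry from the middle of the aParamQP[] array by sliding the tail down with memmove() and shrinking nUsedQP, rather than leaving a hole. The same compaction idiom in a self-contained form (hypothetical integer array, not the CGI parameter table):

#include <stdio.h>
#include <string.h>

int main(void){
  int a[] = {10, 20, 30, 40, 50};
  int nUsed = 5, i = 2;                 /* remove a[2] */
  --nUsed;
  if( i<nUsed ){
    memmove(a+i, a+i+1, sizeof(*a)*(nUsed-i));
  }
  for(i=0; i<nUsed; i++) printf("%d ", a[i]);   /* prints: 10 20 40 50 */
  printf("\n");
  return 0;
}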
714 715 716 717 718 719 720 | cgi_set_parameter_nocopy(mprintf("%s:bytes", zName), mprintf("%d",nContent), 1); } } zName = 0; showBytes = 0; }else{ | | | 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 | cgi_set_parameter_nocopy(mprintf("%s:bytes", zName), mprintf("%d",nContent), 1); } } zName = 0; showBytes = 0; }else{ nArg = tokenize_line(zLine, count(azArg), azArg); for(i=0; i<nArg; i++){ int c = fossil_tolower(azArg[i][0]); int n = strlen(azArg[i]); if( c=='c' && sqlite3_strnicmp(azArg[i],"content-disposition:",n)==0 ){ i++; }else if( c=='n' && sqlite3_strnicmp(azArg[i],"name=",n)==0 ){ zName = azArg[++i]; |
︙ | ︙ |
Changes to src/checkin.c.
︙ | ︙ | |||
19 20 21 22 23 24 25 | ** from the local repository. */ #include "config.h" #include "checkin.h" #include <assert.h> /* | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | < < < < | < < < | > > > > > > > | > > > | | | | > > > > > > > > | > > | > > > > > > > > > > | > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > | > | > > > > > > | | > > > | > > | > | > | > > > > > > > > > > > > > > > > | | | | | < | < < < < < < < < < < < | < < < < | < < < < < < < < < < < < < < < < < < < < < < | < < > > > > | | | > > > > | | | | | > | > > | | | | > | 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 | ** from the local repository. */ #include "config.h" #include "checkin.h" #include <assert.h> /* ** Change filter options. */ enum { /* Zero-based bit indexes. */ CB_EDITED , CB_UPDATED , CB_CHANGED, CB_MISSING , CB_ADDED, CB_DELETED, CB_RENAMED, CB_CONFLICT, CB_META , CB_UNCHANGED, CB_EXTRA, CB_MERGE , CB_RELPATH, CB_CLASSIFY, CB_MTIME , CB_SIZE , CB_FATAL, CB_COMMENT, /* Bitmask values. */ C_EDITED = 1 << CB_EDITED, /* Edited, merged, and conflicted files. */ C_UPDATED = 1 << CB_UPDATED, /* Files updated by merge/integrate. */ C_CHANGED = 1 << CB_CHANGED, /* Treated the same as the above two. */ C_MISSING = 1 << CB_MISSING, /* Missing and non- files. */ C_ADDED = 1 << CB_ADDED, /* Added files. */ C_DELETED = 1 << CB_DELETED, /* Deleted files. */ C_RENAMED = 1 << CB_RENAMED, /* Renamed files. */ C_CONFLICT = 1 << CB_CONFLICT, /* Files having merge conflicts. */ C_META = 1 << CB_META, /* Files with metadata changes. */ C_UNCHANGED = 1 << CB_UNCHANGED, /* Unchanged files. */ C_EXTRA = 1 << CB_EXTRA, /* Unmanaged files. */ C_MERGE = 1 << CB_MERGE, /* Merge contributors. */ C_FILTER = C_EDITED | C_UPDATED | C_CHANGED | C_MISSING | C_ADDED | C_DELETED | C_RENAMED | C_CONFLICT | C_META | C_UNCHANGED | C_EXTRA | C_MERGE, /* All filter bits. */ C_ALL = C_FILTER & ~(C_EXTRA | C_MERGE),/* All managed files. */ C_DIFFER = C_FILTER & ~(C_UNCHANGED | C_MERGE),/* All differences. */ C_RELPATH = 1 << CB_RELPATH, /* Show relative paths. */ C_CLASSIFY = 1 << CB_CLASSIFY, /* Show file change types. 
*/ C_DEFAULT = (C_ALL & ~C_UNCHANGED) | C_MERGE | C_CLASSIFY, C_MTIME = 1 << CB_MTIME, /* Show file modification time. */ C_SIZE = 1 << CB_SIZE, /* Show file size in bytes. */ C_FATAL = 1 << CB_FATAL, /* Fail on MISSING/NOT_A_FILE. */ C_COMMENT = 1 << CB_COMMENT, /* Precede each line with "# ". */ }; /* ** Create a TEMP table named SFILE and add all unmanaged files named on ** the command-line to that table. If directories are named, then add ** all unmanaged files contained underneath those directories. If there ** are no files or directories named on the command-line, then add all ** unmanaged files anywhere in the checkout. */ static void locate_unmanaged_files( int argc, /* Number of command-line arguments to examine */ char **argv, /* values of command-line arguments */ unsigned scanFlags, /* Zero or more SCAN_xxx flags */ Glob *pIgnore /* Do not add files that match this GLOB */ ){ Blob name; /* Name of a candidate file or directory */ char *zName; /* Name of a candidate file or directory */ int isDir; /* 1 for a directory, 0 if doesn't exist, 2 for anything else */ int i; /* Loop counter */ int nRoot; /* length of g.zLocalRoot */ db_multi_exec("CREATE TEMP TABLE sfile(pathname TEXT PRIMARY KEY %s," " mtime INTEGER, size INTEGER)", filename_collation()); nRoot = (int)strlen(g.zLocalRoot); if( argc==0 ){ blob_init(&name, g.zLocalRoot, nRoot - 1); vfile_scan(&name, blob_size(&name), scanFlags, pIgnore, 0); blob_reset(&name); }else{ for(i=0; i<argc; i++){ file_canonical_name(argv[i], &name, 0); zName = blob_str(&name); isDir = file_wd_isdir(zName); if( isDir==1 ){ vfile_scan(&name, nRoot-1, scanFlags, pIgnore, 0); }else if( isDir==0 ){ fossil_warning("not found: %s", &zName[nRoot]); }else if( file_access(zName, R_OK) ){ fossil_fatal("cannot open %s", &zName[nRoot]); }else{ db_multi_exec( "INSERT OR IGNORE INTO sfile(pathname) VALUES(%Q)", &zName[nRoot] ); } blob_reset(&name); } } } /* ** Generate text describing all changes. ** ** We assume that vfile_check_signature has been run. */ static void status_report( Blob *report, /* Append the status report here */ unsigned flags /* Filter and other configuration flags */ ){ Stmt q; int nErr = 0; Blob rewrittenPathname; Blob sql = BLOB_INITIALIZER, where = BLOB_INITIALIZER; const char *zName; int i; /* Skip the file report if no files are requested at all. */ if( !(flags & (C_ALL | C_EXTRA)) ){ goto skipFiles; } /* Assemble the path-limiting WHERE clause, if any. */ blob_zero(&where); for(i=2; i<g.argc; i++){ Blob fname; file_tree_name(g.argv[i], &fname, 0, 1); zName = blob_str(&fname); if( fossil_strcmp(zName, ".")==0 ){ blob_reset(&where); break; } blob_append_sql(&where, " %s (pathname=%Q %s) " "OR (pathname>'%q/' %s AND pathname<'%q0' %s)", (blob_size(&where)>0) ? "OR" : "AND", zName, filename_collation(), zName, filename_collation(), zName, filename_collation() ); } /* Obtain the list of managed files if appropriate. */ blob_zero(&sql); if( flags & C_ALL ){ /* Start with a list of all managed files. */ blob_append_sql(&sql, "SELECT pathname, %s as mtime, %s as size, deleted, chnged, rid," " coalesce(origname!=pathname,0) AS renamed, islink, 1 AS managed" " FROM vfile LEFT JOIN blob USING (rid)" " WHERE is_selected(id)%s", flags & C_MTIME ? "datetime(checkin_mtime(:vid, rid), " "'unixepoch', toLocal())" : "''" /*safe-for-%s*/, flags & C_SIZE ? "coalesce(blob.size, 0)" : "0" /*safe-for-%s*/, blob_sql_text(&where)); /* Exclude unchanged files unless requested. 
*/ if( !(flags & C_UNCHANGED) ){ blob_append_sql(&sql, " AND (chnged OR deleted OR rid=0 OR pathname!=origname)"); } } /* If C_EXTRA, add unmanaged files to the query result too. */ if( flags & C_EXTRA ){ if( blob_size(&sql) ){ blob_append_sql(&sql, " UNION ALL"); } blob_append_sql(&sql, " SELECT pathname, %s, %s, 0, 0, 0, 0, 0, 0" " FROM sfile WHERE pathname NOT IN (%s)%s", flags & C_MTIME ? "datetime(mtime, 'unixepoch', toLocal())" : "''", flags & C_SIZE ? "size" : "0", fossil_all_reserved_names(0), blob_sql_text(&where)); } blob_reset(&where); /* Pre-create the "ok" temporary table so the checkin_mtime() SQL function * does not lead to SQLITE_ABORT_ROLLBACK during execution of the OP_OpenRead * SQLite opcode. checkin_mtime() calls mtime_of_manifest_file() which * creates a temporary table if it doesn't already exist, thus invalidating * the prepared statement in the middle of its execution. */ db_multi_exec("CREATE TEMP TABLE IF NOT EXISTS ok(rid INTEGER PRIMARY KEY)"); /* Append an ORDER BY clause then compile the query. */ blob_append_sql(&sql, " ORDER BY pathname"); db_prepare(&q, "%s", blob_sql_text(&sql)); blob_reset(&sql); /* Bind the checkout version ID to the query if needed. */ if( (flags & C_ALL) && (flags & C_MTIME) ){ db_bind_int(&q, ":vid", db_lget_int("checkout", 0)); } /* Execute the query and assemble the report. */ blob_zero(&rewrittenPathname); while( db_step(&q)==SQLITE_ROW ){ const char *zPathname = db_column_text(&q, 0); const char *zClass = 0; int isManaged = db_column_int(&q, 8); const char *zMtime = db_column_text(&q, 1); int size = db_column_int(&q, 2); int isDeleted = db_column_int(&q, 3); int isChnged = db_column_int(&q, 4); int isNew = isManaged && !db_column_int(&q, 5); int isRenamed = db_column_int(&q, 6); int isLink = db_column_int(&q, 7); char *zFullName = mprintf("%s%s", g.zLocalRoot, zPathname); int isMissing = !file_wd_isfile_or_link(zFullName); /* Determine the file change classification, if any. */ if( isDeleted ){ if( flags & C_DELETED ){ zClass = "DELETED"; } }else if( isMissing ){ if( file_access(zFullName, F_OK)==0 ){ if( flags & C_MISSING ){ zClass = "NOT_A_FILE"; } if( flags & C_FATAL ){ fossil_warning("not a file: %s", zFullName); nErr++; } }else{ if( flags & C_MISSING ){ zClass = "MISSING"; } if( flags & C_FATAL ){ fossil_warning("missing file: %s", zFullName); nErr++; } } }else if( (flags & C_ADDED) && isNew ){ zClass = "ADDED"; }else if( (flags & (C_UPDATED | C_CHANGED)) && isChnged==2 ){ zClass = "UPDATED_BY_MERGE"; }else if( (flags & C_ADDED) && isChnged==3 ){ zClass = "ADDED_BY_MERGE"; }else if( (flags & (C_UPDATED | C_CHANGED)) && isChnged==4 ){ zClass = "UPDATED_BY_INTEGRATE"; }else if( (flags & C_ADDED) && isChnged==5 ){ zClass = "ADDED_BY_INTEGRATE"; }else if( (flags & C_META) && isChnged==6 ){ zClass = "EXECUTABLE"; }else if( (flags & C_META) && isChnged==7 ){ zClass = "SYMLINK"; }else if( (flags & C_META) && isChnged==8 ){ zClass = "UNEXEC"; }else if( (flags & C_META) && isChnged==9 ){ zClass = "UNLINK"; }else if( (flags & C_CONFLICT) && isChnged && !isLink && file_contains_merge_marker(zFullName) ){ zClass = "CONFLICT"; }else if( (flags & (C_EDITED | C_CHANGED)) && isChnged && (isChnged<2 || isChnged>9) ){ zClass = "EDITED"; }else if( (flags & C_RENAMED) && isRenamed ){ zClass = "RENAMED"; }else if( (flags & C_UNCHANGED) && isManaged && !isNew && !isChnged && !isRenamed ){ zClass = "UNCHANGED"; }else if( (flags & C_EXTRA) && !isManaged ){ zClass = "EXTRA"; } /* Only report files for which a change classification was determined. 
*/ if( zClass ){ if( flags & C_COMMENT ){ blob_append(report, "# ", 2); } if( flags & C_CLASSIFY ){ blob_appendf(report, "%-10s ", zClass); } if( flags & C_MTIME ){ blob_append(report, zMtime, -1); blob_append(report, " ", 2); } if( flags & C_SIZE ){ blob_appendf(report, "%7d ", size); } if( flags & C_RELPATH ){ /* If C_RELPATH, display paths relative to current directory. */ const char *zDisplayName; file_relative_name(zFullName, &rewrittenPathname, 0); zDisplayName = blob_str(&rewrittenPathname); if( zDisplayName[0]=='.' && zDisplayName[1]=='/' ){ zDisplayName += 2; /* no unnecessary ./ prefix */ } blob_append(report, zDisplayName, -1); }else{ /* If not C_RELPATH, display paths relative to project root. */ blob_append(report, zPathname, -1); } blob_append(report, "\n", 1); } free(zFullName); } blob_reset(&rewrittenPathname); db_finalize(&q); /* If C_MERGE, put merge contributors at the end of the report. */ skipFiles: if( flags & C_MERGE ){ db_prepare(&q, "SELECT uuid, id FROM vmerge JOIN blob ON merge=rid" " WHERE id<=0"); while( db_step(&q)==SQLITE_ROW ){ if( flags & C_COMMENT ){ blob_append(report, "# ", 2); } if( flags & C_CLASSIFY ){ const char *zClass; switch( db_column_int(&q, 1) ){ case -1: zClass = "CHERRYPICK" ; break; case -2: zClass = "BACKOUT" ; break; case -4: zClass = "INTEGRATE" ; break; default: zClass = "MERGED_WITH"; break; } blob_appendf(report, "%-10s ", zClass); } blob_append(report, db_column_text(&q, 0), -1); blob_append(report, "\n", 1); } db_finalize(&q); } if( nErr ){ fossil_fatal("aborting due to prior errors"); } } /* ** Use the "relative-paths" setting and the --abs-paths and |
︙ | ︙ | |||
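The path-limiting WHERE clause assembled near the top of status_report() relies on a lexicographic range trick: for a directory argument such as src, every pathname underneath it compares greater than 'src/' and less than 'src0', because '0' is the ASCII character immediately after '/'. A pared-down sketch of the same construction for a single hypothetical zDir, using the blob_append_sql() helper seen in the hunk above (the per-repository collation handling is omitted):

/* Sketch only: match zDir itself plus everything below it. */
blob_append_sql(&where,
  " AND (pathname=%Q OR (pathname>'%q/' AND pathname<'%q0'))",
  zDir, zDir, zDir);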
169 170 171 172 173 174 175 | int absPathOption = find_option("abs-paths", 0, 0)!=0; int relPathOption = find_option("rel-paths", 0, 0)!=0; if( absPathOption ){ relativePaths = 0; } if( relPathOption ){ relativePaths = 1; } return relativePaths; } | > | | < | | > | < > | < < | > | | > | < < < > > | | < < > > | > > > > > > > | < > > > > > > > > > > > > > > > > > | | | | < > | | | | > | | | > > > > > | | > > > > > > > > > > > > > > > > > > > | | > > > > > > > > > > > > > > > > > > > | > | > > | | > > > | > | > | | > > | < | > > > > > > > > | | > > > > > > > | > > | < < | > | | < < | > > | < < < < > | > > | < < | < < | < | < < < | > | > > > > > > > > > > > > | | | | | < | | | | > | > > > > > > > > > > > > > > > > | > | 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 | int absPathOption = find_option("abs-paths", 0, 0)!=0; int relPathOption = find_option("rel-paths", 0, 0)!=0; if( absPathOption ){ relativePaths = 0; } if( relPathOption ){ relativePaths = 1; } return relativePaths; } /* ** COMMAND: changes ** COMMAND: status ** ** Usage: %fossil changes|status ?OPTIONS? ?PATHS ...? ** ** Report the change status of files in the current checkout. If one or ** more PATHS are specified, only changes among the named files and ** directories are reported. Directories are searched recursively. ** ** The status command is similar to the changes command, except it lacks ** several of the options supported by changes and it has its own header ** and footer information. The header information is a subset of that ** shown by the info command, and the footer shows if there are any forks. ** Change type classification is always enabled for the status command. ** ** Each line of output is the name of a changed file, with paths shown ** according to the "relative-paths" setting, unless overridden by the ** --abs-paths or --rel-paths options. ** ** By default, all changed files are selected for display. This behavior ** can be overridden by using one or more filter options (listed below), ** in which case only files with the specified change type(s) are shown. ** As a special case, the --no-merge option does not inhibit this default. ** This default shows exactly the set of changes that would be checked ** in by the commit command. ** ** If no filter options are used, or if the --merge option is used, the ** SHA1 hash of each merge contributor check-in version is displayed at ** the end of the report. The --no-merge option is useful to display the ** default set of changed files without the merge contributors. ** ** If change type classification is enabled, each output line starts with ** a code describing the file's change type, e.g. 
EDITED or RENAMED. It ** is enabled by default unless exactly one change type is selected. For ** the purposes of determining the default, --changed counts as selecting ** one change type. The default can be overridden by the --classify or ** --no-classify options. ** ** If both --merge and --no-merge are used, --no-merge has priority. The ** same is true of --classify and --no-classify. ** ** The "fossil changes --extra" command is equivalent to "fossil extras". ** ** --edited and --updated produce disjoint sets. --updated shows a file ** only when it is identical to that of its merge contributor, and the ** change type classification is UPDATED_BY_MERGE or UPDATED_BY_INTEGRATE. ** If the file had to be merged with any other changes, it is considered ** to be merged or conflicted and therefore will be shown by --edited, not ** --updated, with types EDITED or CONFLICT. The --changed option can be ** used to display the union of --edited and --updated. ** ** --differ is so named because it lists all the differences between the ** checked-out version and the checkout directory. In addition to the ** default changes (besides --merge), it lists extra files which (assuming ** ignore-glob is set correctly) may be worth adding. Prior to doing a ** commit, it is good practice to check --differ to see not only which ** changes would be committed but also if any files need to be added. ** ** General options: ** --abs-paths Display absolute pathnames. ** --rel-paths Display pathnames relative to the current working ** directory. ** --sha1sum Verify file status using SHA1 hashing rather than ** relying on file mtimes. ** --case-sensitive <BOOL> Override case-sensitive setting. ** --dotfiles Include unmanaged files beginning with a dot. ** --ignore <CSG> Ignore unmanaged files matching CSG glob patterns. ** ** Options specific to the changes command: ** --header Identify the repository if report is non-empty. ** -v|--verbose Say "(none)" if the change report is empty. ** --classify Start each line with the file's change type. ** --no-classify Do not print file change types. ** ** Filter options: ** --edited Display edited, merged, and conflicted files. ** --updated Display files updated by merge/integrate. ** --changed Combination of the above two options. ** --missing Display missing files. ** --added Display added files. ** --deleted Display deleted files. ** --renamed Display renamed files. ** --conflict Display files having merge conflicts. ** --meta Display files with metadata changes. ** --unchanged Display unchanged files. ** --all Display all managed files, i.e. all of the above. ** --extra Display unmanaged files. ** --differ Display modified and extra files. ** --merge Display merge contributors. ** --no-merge Do not display merge contributors. ** ** See also: extras, ls */ void status_cmd(void){ /* Affirmative and negative flag option tables. */ static const struct { const char *option; /* Flag name. */ unsigned mask; /* Flag bits. */ } flagDefs[] = { {"edited" , C_EDITED }, {"updated" , C_UPDATED }, {"changed" , C_CHANGED }, {"missing" , C_MISSING }, {"added" , C_ADDED }, {"deleted" , C_DELETED }, {"renamed" , C_RENAMED }, {"conflict" , C_CONFLICT }, {"meta" , C_META }, {"unchanged" , C_UNCHANGED}, {"all" , C_ALL }, {"extra" , C_EXTRA }, {"differ" , C_DIFFER }, {"merge" , C_MERGE }, {"classify", C_CLASSIFY}, }, noFlagDefs[] = { {"no-merge", C_MERGE }, {"no-classify", C_CLASSIFY }, }; Blob report = BLOB_INITIALIZER; enum {CHANGES, STATUS} command = *g.argv[1]=='s' ? 
STATUS : CHANGES; int useSha1sum = find_option("sha1sum", 0, 0)!=0; int showHdr = command==CHANGES && find_option("header", 0, 0); int verboseFlag = command==CHANGES && find_option("verbose", "v", 0); const char *zIgnoreFlag = find_option("ignore", 0, 1); unsigned scanFlags = 0; unsigned flags = 0; int vid, i; /* Load affirmative flag options. */ for( i=0; i<count(flagDefs); ++i ){ if( (command==CHANGES || !(flagDefs[i].mask & C_CLASSIFY)) && find_option(flagDefs[i].option, 0, 0) ){ flags |= flagDefs[i].mask; } } /* If no filter options are specified, enable defaults. */ if( !(flags & C_FILTER) ){ flags |= C_DEFAULT; } /* If more than one filter is enabled, enable classification. This is tricky. * Having one filter means flags masked by C_FILTER is a power of two. If a * number masked by one less than itself is zero, it's either zero or a power * of two. It's already known to not be zero because of the above defaults. * Unlike --all, --changed is a single filter, i.e. it sets only one bit. * Also force classification for the status command. */ if( command==STATUS || (flags & (flags-1) & C_FILTER) ){ flags |= C_CLASSIFY; } /* Negative flag options override defaults applied above. */ for( i=0; i<count(noFlagDefs); ++i ){ if( (command==CHANGES || !(noFlagDefs[i].mask & C_CLASSIFY)) && find_option(noFlagDefs[i].option, 0, 0) ){ flags &= ~noFlagDefs[i].mask; } } /* Confirm current working directory is within checkout. */ db_must_be_within_tree(); /* Get checkout version. l*/ vid = db_lget_int("checkout", 0); /* Relative path flag determination is done by a shared function. */ if( determine_cwd_relative_option() ){ flags |= C_RELPATH; } /* If --ignore is not specified, use the ignore-glob setting. */ if( !zIgnoreFlag ){ zIgnoreFlag = db_get("ignore-glob", 0); } /* Get the --dotfiles argument, or read it from the dotfiles setting. */ if( find_option("dotfiles", 0, 0) || db_get_boolean("dotfiles", 0) ){ scanFlags = SCAN_ALL; } /* We should be done with options. */ verify_all_options(); /* Check for changed files. */ vfile_check_signature(vid, useSha1sum ? CKSIG_SHA1 : 0); /* Search for unmanaged files if requested. */ if( flags & C_EXTRA ){ Glob *pIgnore = glob_create(zIgnoreFlag); locate_unmanaged_files(g.argc-2, g.argv+2, scanFlags, pIgnore); glob_free(pIgnore); } /* The status command prints general information before the change list. */ if( command==STATUS ){ fossil_print("repository: %s\n", db_repository_filename()); fossil_print("local-root: %s\n", g.zLocalRoot); if( g.zConfigDbName ){ fossil_print("config-db: %s\n", g.zConfigDbName); } if( vid ){ show_common_info(vid, "checkout:", 1, 1); } db_record_repository_filename(0); } /* Find and print all requested changes. */ blob_zero(&report); status_report(&report, flags); if( blob_size(&report) ){ if( showHdr ){ fossil_print("Changes for %s at %s:\n", db_get("project-name", "???"), g.zLocalRoot); } blob_write_to_file(&report, "-"); }else if( verboseFlag ){ fossil_print(" (none)\n"); } blob_reset(&report); /* The status command ends with warnings about ambiguous leaves (forks). */ if( command==STATUS ){ leaf_ambiguity_warning(vid, vid); } } /* ** Take care of -r version of ls command */ static void ls_cmd_rev( const char *zRev, /* Revision string given */ |
︙ | ︙ | |||
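The classification default in status_cmd() above leans on the classic power-of-two test: once the defaults guarantee that at least one filter bit is set, flags & (flags-1) & C_FILTER is non-zero exactly when two or more filter bits are set, and that is when per-line change-type labels become worth printing. The trick in isolation (self-contained, arbitrary bit values):

#include <stdio.h>

int main(void){
  unsigned one_bit  = 0x010;          /* exactly one bit set   */
  unsigned two_bits = 0x014;          /* more than one bit set */
  /* x & (x-1) clears the lowest set bit, so it is zero only for 0 or a
  ** power of two, i.e. for "at most one filter selected". */
  printf("%u\n", one_bit  & (one_bit  - 1));   /* 0  */
  printf("%u\n", two_bits & (two_bits - 1));   /* 16 */
  return 0;
}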
351 352 353 354 355 356 357 | } db_finalize(&q); } /* ** COMMAND: ls ** | | | < | > | | > > > > > > > > > > > > > | | | | 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 | } db_finalize(&q); } /* ** COMMAND: ls ** ** Usage: %fossil ls ?OPTIONS? ?PATHS ...? ** ** List all files in the current checkout. If PATHS is included, only the ** named files (or their children if directories) are shown. ** ** The ls command is essentially two related commands in one, depending on ** whether or not the -r option is given. -r selects a specific check-in ** version to list, in which case -R can be used to select the repository. ** The fine behavior of the --age, -v, and -t options is altered by the -r ** option as well, as explained below. ** ** The --age option displays file commit times. Like -r, --age has the ** side effect of making -t sort by commit time, not modification time. ** ** The -v option provides extra information about each file. Without -r, ** -v displays the change status, in the manner of the changes command. ** With -r, -v shows the commit time and size of the checked-in files. ** ** The -t option changes the sort order. Without -t, files are sorted by ** path and name (case insensitive sort if -r). If neither --age nor -r ** are used, -t sorts by modification time, otherwise by commit time. ** ** Options: ** --age Show when each file was committed. ** -v|--verbose Provide extra information about each file. ** -t Sort output in time order. ** -r VERSION The specific check-in to list. ** -R|--repository FILE Extract info from repository FILE. ** ** See also: changes, extras, status */ void ls_cmd(void){ int vid; Stmt q; int verboseFlag; |
︙ | ︙ | |||
493 494 495 496 497 498 499 500 | fossil_print("%s%s\n", type, zPathname); } free(zFullName); } db_finalize(&q); } /* | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | | 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 | fossil_print("%s%s\n", type, zPathname); } free(zFullName); } db_finalize(&q); } /* ** COMMAND: extras ** ** Usage: %fossil extras ?OPTIONS? ?PATH1 ...? ** ** Print a list of all files in the source tree that are not part of the ** current checkout. See also the "clean" command. If paths are specified, ** only files in the given directories will be listed. ** ** Files and subdirectories whose names begin with "." are normally |
︙ | ︙ | |||
573 574 575 576 577 578 579 | ** --ignore <CSG> ignore files matching patterns from the argument ** --rel-paths Display pathnames relative to the current working ** directory. ** ** See also: changes, clean, status */ void extras_cmd(void){ | | > < < < > | > > | < < < < < < < < < < < < < < < < < | < > > > < | | < | | | | | 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 | ** --ignore <CSG> ignore files matching patterns from the argument ** --rel-paths Display pathnames relative to the current working ** directory. ** ** See also: changes, clean, status */ void extras_cmd(void){ Blob report = BLOB_INITIALIZER; const char *zIgnoreFlag = find_option("ignore",0,1); unsigned scanFlags = find_option("dotfiles",0,0)!=0 ? SCAN_ALL : 0; unsigned flags = C_EXTRA; int showHdr = find_option("header",0,0)!=0; Glob *pIgnore; if( find_option("temp",0,0)!=0 ) scanFlags |= SCAN_TEMP; db_must_be_within_tree(); if( determine_cwd_relative_option() ){ flags |= C_RELPATH; } if( db_get_boolean("dotfiles", 0) ) scanFlags |= SCAN_ALL; /* We should be done with options.. */ verify_all_options(); if( zIgnoreFlag==0 ){ zIgnoreFlag = db_get("ignore-glob", 0); } pIgnore = glob_create(zIgnoreFlag); locate_unmanaged_files(g.argc-2, g.argv+2, scanFlags, pIgnore); glob_free(pIgnore); g.allowSymlinks = 1; /* Report on symbolic links */ blob_zero(&report); status_report(&report, flags); if( blob_size(&report) ){ if( showHdr ){ fossil_print("Extras for %s at %s:\n", db_get("project-name","???"), g.zLocalRoot); } blob_write_to_file(&report, "-"); } blob_reset(&report); } /* ** COMMAND: clean ** ** Usage: %fossil clean ?OPTIONS? ?PATH ...? ** ** Delete all "extra" files in the source tree. "Extra" files are files ** that are not officially part of the checkout. If one or more PATH ** arguments appear, then only the files named, or files contained with ** directories named, will be removed. ** ** If the --prompt option is used, prompts are issued to confirm the ** permanent removal of each file. Otherwise, files are backed up to the ** undo buffer prior to removal, and prompts are issued only for files ** whose removal cannot be undone due to their large size or due to ** --disable-undo being used. ** ** The --force option treats all prompts as having been answered yes, ** whereas --no-prompt treats them as having been answered no. ** ** Files matching any glob pattern specified by the --clean option are ** deleted without prompting, and the removal cannot be undone. ** ** No file that matches glob patterns specified by --ignore or --keep will ** ever be deleted. Files and subdirectories whose names begin with "." ** are automatically ignored unless the --dotfiles option is used. ** ** The default values for --clean, --ignore, and --keep are determined by ** the (versionable) clean-glob, ignore-glob, and keep-glob settings. ** ** The --verily option ignores the keep-glob and ignore-glob settings and ** turns on --force, --emptydirs, --dotfiles, and --disable-undo. Use the ** --verily option when you really want to clean up everything. Extreme ** care should be exercised when using the --verily option. |
︙ | ︙ | |||
775 776 777 778 779 780 781 | pClean = glob_create(zCleanFlag); nRoot = (int)strlen(g.zLocalRoot); g.allowSymlinks = 1; /* Find symlinks too */ if( !dirsOnlyFlag ){ Stmt q; Blob repo; if( !dryRunFlag && !disableUndo ) undo_begin(); | | | | | | > | 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 | pClean = glob_create(zCleanFlag); nRoot = (int)strlen(g.zLocalRoot); g.allowSymlinks = 1; /* Find symlinks too */ if( !dirsOnlyFlag ){ Stmt q; Blob repo; if( !dryRunFlag && !disableUndo ) undo_begin(); locate_unmanaged_files(g.argc-2, g.argv+2, scanFlags, pIgnore); db_prepare(&q, "SELECT %Q || pathname FROM sfile" " WHERE pathname NOT IN (%s)" " ORDER BY 1", g.zLocalRoot, fossil_all_reserved_names(0) ); if( file_tree_name(g.zRepositoryName, &repo, 0, 0) ){ db_multi_exec("DELETE FROM sfile WHERE pathname=%B", &repo); } db_multi_exec("DELETE FROM sfile WHERE pathname IN" " (SELECT pathname FROM vfile)"); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); if( glob_match(pKeep, zName+nRoot) ){ if( verboseFlag ){ fossil_print("KEPT file \"%s\" not removed (due to --keep" " or \"keep-glob\")\n", zName+nRoot); } |
︙ | ︙ | |||
1066 1067 1068 1069 1070 1071 1072 | blob_appendf(&prompt, "%s%s", p->azTag[i], p->azTag[i+1] ? ", " : ""); } } blob_appendf(&prompt, "\n#\n"); } } | | | 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 | blob_appendf(&prompt, "%s%s", p->azTag[i], p->azTag[i+1] ? ", " : ""); } } blob_appendf(&prompt, "\n#\n"); } } status_report(&prompt, C_DEFAULT | C_FATAL | C_COMMENT); if( g.markPrivate ){ blob_append(&prompt, "# PRIVATE BRANCH: This check-in will be private and will not sync to\n" "# repositories.\n" "#\n", -1 ); } |
︙ | ︙ | |||
1464 1465 1466 1467 1468 1469 1470 | ** is seen in a text file. ** ** Return 1 if the user pressed 'c'. In that case, the file will have ** been converted to UTF-8 (if it was UTF-16) with LF line-endings, ** and the original file will have been renamed to "<filename>-original". */ static int commit_warning( | | | | | > | > | | | | | | | 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 | ** is seen in a text file. ** ** Return 1 if the user pressed 'c'. In that case, the file will have ** been converted to UTF-8 (if it was UTF-16) with LF line-endings, ** and the original file will have been renamed to "<filename>-original". */ static int commit_warning( Blob *pContent, /* The content of the file being committed. */ int crlfOk, /* Non-zero if CR/LF warnings should be disabled. */ int binOk, /* Non-zero if binary warnings should be disabled. */ int encodingOk, /* Non-zero if encoding warnings should be disabled. */ int noPrompt, /* 0 to always prompt, 1 for 'N', 2 for 'Y'. */ const char *zFilename, /* The full name of the file being committed. */ Blob *pReason /* Reason for warning, if any (non-fatal only). */ ){ int bReverse; /* UTF-16 byte order is reversed? */ int fUnicode; /* return value of could_be_utf16() */ int fBinary; /* does the blob content appear to be binary? */ int lookFlags; /* output flags from looks_like_utf8/utf16() */ int fHasAnyCr; /* the blob contains one or more CR chars */ int fHasLoneCrOnly; /* all detected line endings are CR only */ int fHasCrLfOnly; /* all detected line endings are CR/LF pairs */ int fHasInvalidUtf8 = 0;/* contains invalid UTF-8 */ char *zMsg; /* Warning message */ Blob fname; /* Relative pathname of the file */ static int allOk = 0; /* Set to true to disable this routine */ if( allOk ) return 0; fUnicode = could_be_utf16(pContent, &bReverse); if( fUnicode ){ lookFlags = looks_like_utf16(pContent, bReverse, LOOK_NUL); }else{ lookFlags = looks_like_utf8(pContent, LOOK_NUL); if( !(lookFlags & LOOK_BINARY) && invalid_utf8(pContent) ){ fHasInvalidUtf8 = 1; } } fHasAnyCr = (lookFlags & LOOK_CR); fBinary = (lookFlags & LOOK_BINARY); fHasLoneCrOnly = ((lookFlags & LOOK_EOL) == LOOK_LONE_CR); fHasCrLfOnly = ((lookFlags & LOOK_EOL) == LOOK_CRLF); if( fUnicode || fHasAnyCr || fBinary || fHasInvalidUtf8 ){ const char *zWarning; const char *zDisable; const char *zConvert = "c=convert/"; Blob ans; char cReply; if( fBinary ){ int fHasNul = (lookFlags & LOOK_NUL); /* contains NUL chars? */ int fHasLong = (lookFlags & LOOK_LONG); /* overly long line? */ if( binOk ){ return 0; /* We don't want binary warnings for this file. */ } if( !fHasNul && fHasLong ){ zWarning = "long lines"; zConvert = ""; /* We cannot convert overlong lines. */ }else{ zWarning = "binary data"; zConvert = ""; /* We cannot convert binary files. */ } zDisable = "\"binary-glob\" setting"; }else if( fUnicode && fHasAnyCr ){ if( crlfOk && encodingOk ){ |
︙ | ︙ | |||
1560 1561 1562 1563 1564 1565 1566 | } file_relative_name(zFilename, &fname, 0); zMsg = mprintf( "%s contains %s. Use --no-warnings or the %s to" " disable this warning.\n" "Commit anyhow (a=all/%sy/N)? ", blob_str(&fname), zWarning, zDisable, zConvert); | > | < | > > > > > > > | | | | | > > > | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1801 1802 1803 1804 1805 1806 1807 1808 1809 1810 1811 1812 1813 1814 1815 1816 1817 1818 1819 1820 1821 1822 1823 1824 1825 1826 1827 1828 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 1843 1844 1845 1846 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857 1858 1859 1860 1861 1862 1863 1864 1865 1866 1867 1868 1869 1870 1871 1872 1873 1874 1875 1876 1877 1878 1879 1880 1881 1882 1883 1884 1885 1886 1887 1888 1889 1890 1891 1892 1893 1894 1895 1896 1897 1898 1899 1900 1901 1902 1903 1904 1905 1906 1907 1908 1909 1910 1911 1912 1913 1914 1915 1916 1917 1918 1919 1920 1921 1922 1923 1924 1925 1926 | } file_relative_name(zFilename, &fname, 0); zMsg = mprintf( "%s contains %s. Use --no-warnings or the %s to" " disable this warning.\n" "Commit anyhow (a=all/%sy/N)? ", blob_str(&fname), zWarning, zDisable, zConvert); if( noPrompt==0 ){ prompt_user(zMsg, &ans); cReply = blob_str(&ans)[0]; blob_reset(&ans); }else if( noPrompt==2 ){ cReply = 'Y'; }else{ cReply = 'N'; } fossil_free(zMsg); if( cReply=='a' || cReply=='A' ){ allOk = 1; }else if( *zConvert && (cReply=='c' || cReply=='C') ){ char *zOrig = file_newname(zFilename, "original", 1); FILE *f; blob_write_to_file(pContent, zOrig); fossil_free(zOrig); f = fossil_fopen(zFilename, "wb"); if( f==0 ){ fossil_warning("cannot open %s for writing", zFilename); }else{ if( fUnicode ){ int bomSize; const unsigned char *bom = get_utf8_bom(&bomSize); fwrite(bom, 1, bomSize, f); blob_to_utf8_no_bom(pContent, 0); }else if( fHasInvalidUtf8 ){ blob_cp1252_to_utf8(pContent); } if( fHasAnyCr ){ blob_to_lf_only(pContent); } fwrite(blob_buffer(pContent), 1, blob_size(pContent), f); fclose(f); } return 1; }else if( cReply!='y' && cReply!='Y' ){ fossil_fatal("Abandoning commit due to %s in %s", zWarning, blob_str(&fname)); }else if( noPrompt==2 ){ if( pReason ){ blob_append(pReason, zWarning, -1); } return 1; } blob_reset(&fname); } return 0; } /* ** COMMAND: test-commit-warning ** ** Usage: %fossil test-commit-warning ?OPTIONS? ** ** Check each file in the checkout, including unmodified ones, using all ** the pre-commit checks. ** ** Options: ** --no-settings Do not consider any glob settings. ** -v|--verbose Show per-file results for all pre-commit checks. ** ** See also: commit, extras */ void test_commit_warning(void){ int rc = 0; int noSettings; int verboseFlag; Stmt q; noSettings = find_option("no-settings",0,0)!=0; verboseFlag = find_option("verbose","v",0)!=0; verify_all_options(); db_must_be_within_tree(); db_prepare(&q, "SELECT %Q || pathname, pathname, %s, %s, %s FROM vfile" " WHERE NOT deleted", g.zLocalRoot, glob_expr("pathname", noSettings ? 0 : db_get("crnl-glob","")), glob_expr("pathname", noSettings ? 0 : db_get("binary-glob","")), glob_expr("pathname", noSettings ? 
0 : db_get("encoding-glob","")) ); while( db_step(&q)==SQLITE_ROW ){ const char *zFullname; const char *zName; Blob content; Blob reason; int crnlOk, binOk, encodingOk; int fileRc; zFullname = db_column_text(&q, 0); zName = db_column_text(&q, 1); crnlOk = db_column_int(&q, 2); binOk = db_column_int(&q, 3); encodingOk = db_column_int(&q, 4); blob_zero(&content); if( file_wd_islink(zFullname) ){ blob_read_link(&content, zFullname); }else{ blob_read_from_file(&content, zFullname); } blob_zero(&reason); fileRc = commit_warning(&content, crnlOk, binOk, encodingOk, 2, zFullname, &reason); if( fileRc || verboseFlag ){ fossil_print("%d\t%s\t%s\n", fileRc, zName, blob_str(&reason)); } blob_reset(&reason); rc |= fileRc; } db_finalize(&q); fossil_print("%d\n", rc); } /* ** qsort() comparison routine for an array of pointers to strings. */ static int tagCmp(const void *a, const void *b){ char **pA = (char**)a; char **pB = (char**)b; |
︙ | ︙ | |||
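The commit_warning() hunks above add a noPrompt argument (0 = always prompt, 1 = assume 'N', 2 = assume 'Y') and an optional pReason blob that collects the warning text when prompting is suppressed. A hedged sketch of a non-interactive call, mirroring the loop in the new test-commit-warning command:

    /* Hedged sketch: invoke commit_warning() without prompting, the way
    ** test-commit-warning does.  zFullname, crnlOk, binOk and encodingOk
    ** stand in for values a real caller would already have computed. */
    Blob content, reason;
    int warned;
    blob_zero(&content);
    blob_read_from_file(&content, zFullname);
    blob_zero(&reason);
    warned = commit_warning(&content, crnlOk, binOk, encodingOk,
                            2 /* noPrompt: assume 'Y' */, zFullname, &reason);
    if( warned ){
      fossil_print("%s: %s\n", zFullname, blob_str(&reason));
    }
    blob_reset(&reason);
    blob_reset(&content);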
1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 | ** --close close the branch being committed ** --delta use a delta manifest in the commit process ** --integrate close all merged-in branches ** -m|--comment COMMENT-TEXT use COMMENT-TEXT as commit comment ** -M|--message-file FILE read the commit comment from given file ** --mimetype MIMETYPE mimetype of check-in comment ** -n|--dry-run If given, display instead of run actions ** --no-warnings omit all warnings about file contents ** --nosign do not attempt to sign this commit with gpg ** --private do not sync changes and their descendants ** --sha1sum verify file status using SHA1 hashing rather ** than relying on file mtimes ** --tag TAG-NAME assign given tag TAG-NAME to the check-in | > > > | > > > > > > > | 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 2031 2032 2033 2034 2035 2036 2037 2038 2039 2040 2041 2042 2043 | ** --close close the branch being committed ** --delta use a delta manifest in the commit process ** --integrate close all merged-in branches ** -m|--comment COMMENT-TEXT use COMMENT-TEXT as commit comment ** -M|--message-file FILE read the commit comment from given file ** --mimetype MIMETYPE mimetype of check-in comment ** -n|--dry-run If given, display instead of run actions ** --no-prompt This option disables prompting the user for ** input and assumes an answer of 'No' for every ** question. ** --no-warnings omit all warnings about file contents ** --nosign do not attempt to sign this commit with gpg ** --private do not sync changes and their descendants ** --sha1sum verify file status using SHA1 hashing rather ** than relying on file mtimes ** --tag TAG-NAME assign given tag TAG-NAME to the check-in ** --date-override DATETIME DATE to use instead of 'now' ** --user-override USER USER to use instead of the current default ** ** DATETIME may be "now" or "YYYY-MM-DDTHH:MM:SS.SSS". If in ** year-month-day form, it may be truncated, the "T" may be replaced by ** a space, and it may also name a timezone offset from UTC as "-HH:MM" ** (westward) or "+HH:MM" (eastward). Either no timezone suffix or "Z" ** means UTC. ** ** See also: branch, changes, checkout, extras, sync */ void commit_cmd(void){ int hasChanges; /* True if unsaved changes exist */ int vid; /* blob-id of parent version */ int nrid; /* blob-id of a modified file */ int nvid; /* Blob-id of the new check-in */ Blob comment; /* Check-in comment */ const char *zComment; /* Check-in comment */ Stmt q; /* Various queries */ char *zUuid; /* UUID of the new check-in */ int useSha1sum = 0; /* True to verify file status using SHA1 hashing */ int noSign = 0; /* True to omit signing the manifest using GPG */ int isAMerge = 0; /* True if checking in a merge */ int noWarningFlag = 0; /* True if skipping all warnings */ int noPrompt = 0; /* True if skipping all prompts */ int forceFlag = 0; /* Undocumented: Disables all checks */ int forceDelta = 0; /* Force a delta-manifest */ int forceBaseline = 0; /* Force a baseline-manifest */ int allowConflict = 0; /* Allow unresolve merge conflicts */ int allowEmpty = 0; /* Allow a commit with no changes */ int allowFork = 0; /* Allow the commit to fork */ int allowOlder = 0; /* Allow a commit older than its ancestor */ |
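By way of illustration, the DATETIME grammar described in the commit help above admits values such as the following (the timestamps themselves are only examples):

    --date-override now
    --date-override 2016-11-07
    --date-override "2016-11-07 00:50:10.445"
    --date-override 2016-11-07T00:50:10Z
    --date-override 2016-11-07T00:50:10-05:00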
︙ | ︙ | |||
1752 1753 1754 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 | } zComment = find_option("comment","m",1); forceFlag = find_option("force", "f", 0)!=0; allowConflict = find_option("allow-conflict",0,0)!=0; allowEmpty = find_option("allow-empty",0,0)!=0; allowFork = find_option("allow-fork",0,0)!=0; allowOlder = find_option("allow-older",0,0)!=0; noWarningFlag = find_option("no-warnings", 0, 0)!=0; sCiInfo.zBranch = find_option("branch","b",1); sCiInfo.zColor = find_option("bgcolor",0,1); sCiInfo.zBrClr = find_option("branchcolor",0,1); sCiInfo.closeFlag = find_option("close",0,0)!=0; sCiInfo.integrateFlag = find_option("integrate",0,0)!=0; sCiInfo.zMimetype = find_option("mimetype",0,1); | > | 2077 2078 2079 2080 2081 2082 2083 2084 2085 2086 2087 2088 2089 2090 2091 | } zComment = find_option("comment","m",1); forceFlag = find_option("force", "f", 0)!=0; allowConflict = find_option("allow-conflict",0,0)!=0; allowEmpty = find_option("allow-empty",0,0)!=0; allowFork = find_option("allow-fork",0,0)!=0; allowOlder = find_option("allow-older",0,0)!=0; noPrompt = find_option("no-prompt", 0, 0)!=0; noWarningFlag = find_option("no-warnings", 0, 0)!=0; sCiInfo.zBranch = find_option("branch","b",1); sCiInfo.zColor = find_option("bgcolor",0,1); sCiInfo.zBrClr = find_option("branchcolor",0,1); sCiInfo.closeFlag = find_option("close",0,0)!=0; sCiInfo.integrateFlag = find_option("integrate",0,0)!=0; sCiInfo.zMimetype = find_option("mimetype",0,1); |
︙ | ︙ | |||
1780 1781 1782 1783 1784 1785 1786 | } sCiInfo.zDateOvrd = find_option("date-override",0,1); sCiInfo.zUserOvrd = find_option("user-override",0,1); db_must_be_within_tree(); noSign = db_get_boolean("omitsign", 0)|noSign; if( db_get_boolean("clearsign", 0)==0 ){ noSign = 1; } useCksum = db_get_boolean("repo-cksum", 1); | | | 2106 2107 2108 2109 2110 2111 2112 2113 2114 2115 2116 2117 2118 2119 2120 | } sCiInfo.zDateOvrd = find_option("date-override",0,1); sCiInfo.zUserOvrd = find_option("user-override",0,1); db_must_be_within_tree(); noSign = db_get_boolean("omitsign", 0)|noSign; if( db_get_boolean("clearsign", 0)==0 ){ noSign = 1; } useCksum = db_get_boolean("repo-cksum", 1); outputManifest = db_get_manifest_setting(); verify_all_options(); /* Escape special characters in tags and put all tags in sorted order */ if( nTag ){ int i; for(i=0; i<nTag; i++) sCiInfo.azTag[i] = mprintf("%F", sCiInfo.azTag[i]); qsort((void*)sCiInfo.azTag, nTag, sizeof(sCiInfo.azTag[0]), tagCmp); |
︙ | ︙ | |||
1812 1813 1814 1815 1816 1817 1818 | g.markPrivate = 1; } /* ** Autosync if autosync is enabled and this is not a private check-in. */ if( !g.markPrivate ){ | | < < < | < > | | > > > > > > | | > > > > | > > | 2138 2139 2140 2141 2142 2143 2144 2145 2146 2147 2148 2149 2150 2151 2152 2153 2154 2155 2156 2157 2158 2159 2160 2161 2162 2163 2164 2165 2166 2167 2168 2169 2170 2171 2172 2173 2174 2175 2176 2177 2178 2179 2180 2181 2182 2183 2184 2185 2186 2187 2188 2189 2190 2191 2192 2193 2194 | g.markPrivate = 1; } /* ** Autosync if autosync is enabled and this is not a private check-in. */ if( !g.markPrivate ){ if( autosync_loop(SYNC_PULL, db_get_int("autosync-tries", 1), 1) ){ fossil_exit(1); } } /* Require confirmation to continue with the check-in if there is ** clock skew */ if( g.clockSkewSeen ){ if( !noPrompt ){ prompt_user("continue in spite of time skew (y/N)? ", &ans); cReply = blob_str(&ans)[0]; blob_reset(&ans); }else{ fossil_print("Abandoning commit due to time skew\n"); cReply = 'N'; } if( cReply!='y' && cReply!='Y' ){ fossil_exit(1); } } /* There are two ways this command may be executed. If there are ** no arguments following the word "commit", then all modified files ** in the checked out directory are committed. If one or more arguments ** follows "commit", then only those files are committed. ** ** After the following function call has returned, the Global.aCommitFile[] ** array is allocated to contain the "id" field from the vfile table ** for each file to be committed. Or, if aCommitFile is NULL, all files ** should be committed. */ if( select_commit_files() ){ if( !noPrompt ){ prompt_user("continue (y/N)? ", &ans); cReply = blob_str(&ans)[0]; blob_reset(&ans); }else{ cReply = 'N'; } if( cReply!='y' && cReply!='Y' ){ fossil_exit(1); } } isAMerge = db_exists("SELECT 1 FROM vmerge WHERE id=0 OR id<-2"); if( g.aCommitFile && isAMerge ){ fossil_fatal("cannot do a partial commit of a merge"); } /* Doing "fossil mv fileA fileB; fossil add fileA; fossil commit fileA" |
︙ | ︙ | |||
1950 1951 1952 1953 1954 1955 1956 | if( zComment ){ blob_zero(&comment); blob_append(&comment, zComment, -1); }else if( zComFile ){ blob_zero(&comment); blob_read_from_file(&comment, zComFile); blob_to_utf8_no_bom(&comment, 1); | | | > | > > > | | > > > > > | 2285 2286 2287 2288 2289 2290 2291 2292 2293 2294 2295 2296 2297 2298 2299 2300 2301 2302 2303 2304 2305 2306 2307 2308 2309 2310 2311 2312 2313 2314 2315 2316 2317 2318 2319 2320 2321 2322 2323 | if( zComment ){ blob_zero(&comment); blob_append(&comment, zComment, -1); }else if( zComFile ){ blob_zero(&comment); blob_read_from_file(&comment, zComFile); blob_to_utf8_no_bom(&comment, 1); }else if( dryRunFlag ){ blob_zero(&comment); }else if( !noPrompt ){ char *zInit = db_text(0, "SELECT value FROM vvar WHERE name='ci-comment'"); prepare_commit_comment(&comment, zInit, &sCiInfo, vid); if( zInit && zInit[0] && fossil_strcmp(zInit, blob_str(&comment))==0 ){ prompt_user("unchanged check-in comment. continue (y/N)? ", &ans); cReply = blob_str(&ans)[0]; blob_reset(&ans); if( cReply!='y' && cReply!='Y' ){ fossil_exit(1); } } free(zInit); } if( blob_size(&comment)==0 ){ if( !dryRunFlag ){ if( !noPrompt ){ prompt_user("empty check-in comment. continue (y/N)? ", &ans); cReply = blob_str(&ans)[0]; blob_reset(&ans); }else{ fossil_print("Abandoning commit due to empty check-in comment\n"); cReply = 'N'; } if( cReply!='y' && cReply!='Y' ){ fossil_exit(1); } } }else{ db_multi_exec("REPLACE INTO vvar VALUES('ci-comment',%B)", &comment); db_end_transaction(0); |
︙ | ︙ | |||
2019 2020 2021 2022 2023 2024 2025 | blob_read_link(&content, zFullname); }else{ blob_read_from_file(&content, zFullname); } /* Do not emit any warnings when they are disabled. */ if( !noWarningFlag ){ abortCommit |= commit_warning(&content, crlfOk, binOk, | | > | 2363 2364 2365 2366 2367 2368 2369 2370 2371 2372 2373 2374 2375 2376 2377 2378 | blob_read_link(&content, zFullname); }else{ blob_read_from_file(&content, zFullname); } /* Do not emit any warnings when they are disabled. */ if( !noWarningFlag ){ abortCommit |= commit_warning(&content, crlfOk, binOk, encodingOk, noPrompt, zFullname, 0); } if( contains_merge_marker(&content) ){ Blob fname; /* Relative pathname of the file */ nConflict++; file_relative_name(zFullname, &fname, 0); fossil_print("possible unresolved merge conflict in %s\n", |
︙ | ︙ | |||
2103 2104 2105 2106 2107 2108 2109 | blob_reset(&delta); } }else if( forceDelta ){ fossil_fatal("unable to find a baseline-manifest for the delta"); } } if( !noSign && !g.markPrivate && clearsign(&manifest, &manifest) ){ | > | | > > > > > | | | | 2448 2449 2450 2451 2452 2453 2454 2455 2456 2457 2458 2459 2460 2461 2462 2463 2464 2465 2466 2467 2468 2469 2470 2471 2472 2473 2474 2475 2476 2477 2478 2479 2480 2481 2482 2483 2484 2485 2486 2487 2488 2489 2490 2491 2492 2493 2494 2495 2496 2497 2498 2499 2500 2501 2502 2503 2504 2505 2506 2507 2508 2509 2510 2511 2512 2513 2514 2515 | blob_reset(&delta); } }else if( forceDelta ){ fossil_fatal("unable to find a baseline-manifest for the delta"); } } if( !noSign && !g.markPrivate && clearsign(&manifest, &manifest) ){ if( !noPrompt ){ prompt_user("unable to sign manifest. continue (y/N)? ", &ans); cReply = blob_str(&ans)[0]; blob_reset(&ans); }else{ fossil_print("Abandoning commit due to manifest signing failure\n"); cReply = 'N'; } if( cReply!='y' && cReply!='Y' ){ fossil_exit(1); } } /* If the -n|--dry-run option is specified, output the manifest file ** and rollback the transaction. */ if( dryRunFlag ){ blob_write_to_file(&manifest, ""); } if( outputManifest & MFESTFLG_RAW ){ zManifestFile = mprintf("%smanifest", g.zLocalRoot); blob_write_to_file(&manifest, zManifestFile); blob_reset(&manifest); blob_read_from_file(&manifest, zManifestFile); free(zManifestFile); } nvid = content_put(&manifest); if( nvid==0 ){ fossil_fatal("trouble committing manifest: %s", g.zErrMsg); } db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nvid); if( manifest_crosslink(nvid, &manifest, dryRunFlag ? MC_NONE : MC_PERMIT_HOOKS)==0 ){ fossil_fatal("%s", g.zErrMsg); } assert( blob_is_reset(&manifest) ); content_deltify(vid, nvid, 0); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", nvid); db_prepare(&q, "SELECT uuid,merge FROM vmerge JOIN blob ON merge=rid" " WHERE id=-4"); while( db_step(&q)==SQLITE_ROW ){ const char *zIntegrateUuid = db_column_text(&q, 0); if( is_a_leaf(db_column_int(&q, 1)) ){ fossil_print("Closed: %s\n", zIntegrateUuid); }else{ fossil_print("Not_Closed: %s (not a leaf any more)\n", zIntegrateUuid); } } db_finalize(&q); fossil_print("New_Version: %s\n", zUuid); if( outputManifest & MFESTFLG_UUID ){ zManifestFile = mprintf("%smanifest.uuid", g.zLocalRoot); blob_zero(&muuid); blob_appendf(&muuid, "%s\n", zUuid); blob_write_to_file(&muuid, zManifestFile); free(zManifestFile); blob_reset(&muuid); } |
︙ | ︙ | |||
2225 2226 2227 2228 2229 2230 2231 | } /* Clear the undo/redo stack */ undo_reset(); /* Commit */ db_multi_exec("DELETE FROM vvar WHERE name='ci-comment'"); | | | > > > > > > > > > > | | 2576 2577 2578 2579 2580 2581 2582 2583 2584 2585 2586 2587 2588 2589 2590 2591 2592 2593 2594 2595 2596 2597 2598 2599 2600 2601 2602 2603 2604 2605 2606 2607 | } /* Clear the undo/redo stack */ undo_reset(); /* Commit */ db_multi_exec("DELETE FROM vvar WHERE name='ci-comment'"); db_multi_exec("PRAGMA repository.application_id=252006673;"); db_multi_exec("PRAGMA localdb.application_id=252006674;"); if( dryRunFlag ){ db_end_transaction(1); exit(1); } db_end_transaction(0); if( outputManifest & MFESTFLG_TAGS ){ Blob tagslist; zManifestFile = mprintf("%smanifest.tags", g.zLocalRoot); blob_zero(&tagslist); get_checkin_taglist(nvid, &tagslist); blob_write_to_file(&tagslist, zManifestFile); blob_reset(&tagslist); free(zManifestFile); } if( !g.markPrivate ){ autosync_loop(SYNC_PUSH|SYNC_PULL, db_get_int("autosync-tries", 1), 0); } if( count_nonbranch_children(vid)>1 ){ fossil_print("**** warning: a fork has occurred *****\n"); } } |
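When the MFESTFLG_TAGS bit is set, the block above writes a manifest.tags file via get_checkin_taglist() (defined in the src/checkout.c changes below). Judging from that routine's output format, the file holds one "branch" line for the current branch followed by one "tag" line per symbolic tag; a purely illustrative example with hypothetical names:

    branch trunk
    tag release
    tag version-1.37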
Changes to src/checkout.c.
︙ | ︙ | |||
125 126 127 128 129 130 131 132 133 134 135 136 137 | } /* ** If the "manifest" setting is true, then automatically generate ** files named "manifest" and "manifest.uuid" containing, respectively, ** the text of the manifest and the artifact ID of the manifest. */ void manifest_to_disk(int vid){ char *zManFile; Blob manifest; Blob hash; | > > > > > > | > < < > | | < | < > > > > > > > > > > > > > > > > > > > > | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 | } /* ** If the "manifest" setting is true, then automatically generate ** files named "manifest" and "manifest.uuid" containing, respectively, ** the text of the manifest and the artifact ID of the manifest. ** If the manifest setting is set, but is not a boolean value, then treat ** each character as a flag to enable writing "manifest", "manifest.uuid" or ** "manifest.tags". */ void manifest_to_disk(int vid){ char *zManFile; Blob manifest; Blob hash; Blob taglist; int flg; flg = db_get_manifest_setting(); if( flg & (MFESTFLG_RAW|MFESTFLG_UUID) ){ blob_zero(&manifest); content_get(vid, &manifest); blob_zero(&hash); sha1sum_blob(&manifest, &hash); sterilize_manifest(&manifest); } if( flg & MFESTFLG_RAW ){ zManFile = mprintf("%smanifest", g.zLocalRoot); blob_write_to_file(&manifest, zManFile); free(zManFile); }else{ if( !db_exists("SELECT 1 FROM vfile WHERE pathname='manifest'") ){ zManFile = mprintf("%smanifest", g.zLocalRoot); file_delete(zManFile); free(zManFile); } } if( flg & MFESTFLG_UUID ){ zManFile = mprintf("%smanifest.uuid", g.zLocalRoot); blob_append(&hash, "\n", 1); blob_write_to_file(&hash, zManFile); free(zManFile); blob_reset(&hash); }else{ if( !db_exists("SELECT 1 FROM vfile WHERE pathname='manifest.uuid'") ){ zManFile = mprintf("%smanifest.uuid", g.zLocalRoot); file_delete(zManFile); free(zManFile); } } if( flg & MFESTFLG_TAGS ){ blob_zero(&taglist); zManFile = mprintf("%smanifest.tags", g.zLocalRoot); get_checkin_taglist(vid, &taglist); blob_write_to_file(&taglist, zManFile); free(zManFile); blob_reset(&taglist); }else{ if( !db_exists("SELECT 1 FROM vfile WHERE pathname='manifest.tags'") ){ zManFile = mprintf("%smanifest.tags", g.zLocalRoot); file_delete(zManFile); free(zManFile); } } } /* ** Find the branch name and all symbolic tags for a particular check-in ** identified by "rid". ** ** The branch name is actually only extracted if this procedure is run ** from within a local check-out. And the branch name is not the branch ** name for "rid" but rather the branch name for the current check-out. ** It is unclear if the rid parameter is always the same as the current ** check-out. 
*/ void get_checkin_taglist(int rid, Blob *pOut){ Stmt stmt; char *zCurrent; blob_reset(pOut); zCurrent = db_text(0, "SELECT value FROM tagxref" " WHERE rid=%d AND tagid=%d", rid, TAG_BRANCH); blob_appendf(pOut, "branch %s\n", zCurrent); db_prepare(&stmt, "SELECT substr(tagname, 5)" " FROM tagxref, tag" " WHERE tagxref.rid=%d" " AND tagxref.tagtype>0" " AND tag.tagid=tagxref.tagid" " AND tag.tagname GLOB 'sym-*'", rid); while( db_step(&stmt)==SQLITE_ROW ){ const char *zName; zName = db_column_text(&stmt, 0); blob_appendf(pOut, "tag %s\n", zName); } db_reset(&stmt); db_finalize(&stmt); } /* ** COMMAND: checkout* ** COMMAND: co* ** ** Usage: %fossil checkout ?VERSION | --latest? ?OPTIONS? ** or: %fossil co ?VERSION | --latest? ?OPTIONS? |
︙ | ︙ | |||
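The manifest_to_disk() hunk above decides which of the three files to write by testing bits returned from db_get_manifest_setting(). A condensed sketch of that pattern, using the MFESTFLG_* names from this diff (how individual setting characters map to bits is not shown in these hunks and is assumed to live inside db_get_manifest_setting()):

    /* Hedged sketch of the flag test used throughout this change. */
    int flg = db_get_manifest_setting();
    if( flg & MFESTFLG_RAW ){  /* write <root>/manifest      */ }
    if( flg & MFESTFLG_UUID ){ /* write <root>/manifest.uuid */ }
    if( flg & MFESTFLG_TAGS ){ /* write <root>/manifest.tags */ }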
304 305 306 307 308 309 310 | verify_all_options(); if( !forceFlag && unsaved_changes(0) ){ fossil_fatal("there are unsaved changes in the current checkout"); } if( !forceFlag && db_table_exists("localdb","stash") | | | 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 | verify_all_options(); if( !forceFlag && unsaved_changes(0) ){ fossil_fatal("there are unsaved changes in the current checkout"); } if( !forceFlag && db_table_exists("localdb","stash") && db_exists("SELECT 1 FROM localdb.stash") ){ fossil_fatal("closing the checkout will delete your stash"); } if( db_is_writeable("repository") ){ char *zUnset = mprintf("ckout:%q", g.zLocalRoot); db_unset(zUnset, 1); fossil_free(zUnset); } unlink_local_database(1); db_close(1); unlink_local_database(0); } |
Changes to src/clone.c.
︙ | ︙ | |||
97 98 99 100 101 102 103 | ** ** Filesystem: ** [file://]path/to/repo.fossil ** ** Note 1: For ssh and filesystem, path must have an extra leading ** '/' to use an absolute path. ** | | > | | > | 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 | ** ** Filesystem: ** [file://]path/to/repo.fossil ** ** Note 1: For ssh and filesystem, path must have an extra leading ** '/' to use an absolute path. ** ** Note 2: Use %HH escapes for special characters in the userid and ** password. For example "%40" in place of "@", "%2f" in place ** of "/", and "%3a" in place of ":". ** ** By default, your current login name is used to create the default ** admin user. This can be overridden using the -A|--admin-user ** parameter. ** ** Options: ** --admin-user|-A USERNAME Make USERNAME the administrator ** --once Don't remember the URI. ** --private Also clone private branches ** --ssl-identity FILENAME Use the SSL identity if requested by the server ** --ssh-command|-c SSH Use SSH as the "ssh" command ** --httpauth|-B USER:PASS Add HTTP Basic Authorization to requests ** -u|--unversioned Also sync unversioned content ** -v|--verbose Show more statistics in output ** ** See also: init */ void clone_cmd(void){ char *zPassword; const char *zDefaultUser; /* Optional name of the default user */ const char *zHttpAuth; /* HTTP Authorization user:pass information */ int nErr = 0; int urlFlags = URL_PROMPT_PW | URL_REMEMBER; int syncFlags = SYNC_CLONE; /* Also clone private branches */ if( find_option("private",0,0)!=0 ) syncFlags |= SYNC_PRIVATE; if( find_option("once",0,0)!=0) urlFlags &= ~URL_REMEMBER; if( find_option("verbose","v",0)!=0) syncFlags |= SYNC_VERBOSE; if( find_option("unversioned","u",0)!=0 ) syncFlags |= SYNC_UNVERSIONED; zHttpAuth = find_option("httpauth","B",1); zDefaultUser = find_option("admin-user","A",1); clone_ssh_find_options(); url_proxy_options(); /* We should be done with options.. */ verify_all_options(); |
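As a concrete illustration of the notes in the clone help above (host, user, and paths here are hypothetical): a user id of "jane@example.com" with password "s3cr:t" would be escaped with %HH sequences, and an ssh URL needs an extra leading '/' for an absolute path:

    https://jane%40example.com:s3cr%3at@fossil.example.com/cgi-bin/repo.cgi
    ssh://user@example.com//home/user/repos/project.fossil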
︙ | ︙ |
Changes to src/codecheck1.c.
︙ | ︙ | |||
247 248 249 250 251 252 253 | /* ** A list of functions that return strings that are safe to insert into ** SQL using %s. */ static const char *azSafeFunc[] = { "filename_collation", | < | 247 248 249 250 251 252 253 254 255 256 257 258 259 260 | /* ** A list of functions that return strings that are safe to insert into ** SQL using %s. */ static const char *azSafeFunc[] = { "filename_collation", "leaf_is_closed_sql", "timeline_query_for_www", "timeline_query_for_tty", "blob_sql_text", "glob_expr", "fossil_all_reserved_names", "configure_inop_rhs", |
︙ | ︙ |
Changes to src/config.h.
︙ | ︙ | |||
97 98 99 100 101 102 103 104 105 106 107 108 109 110 | # else # if defined(COMPILER_VERSION) && !defined(NO_COMPILER_VERSION) # define COMPILER_NAME "pellesc32-" COMPILER_VERSION # else # define COMPILER_NAME "pellesc32" # endif # endif # elif defined(_MSC_VER) # if !defined(COMPILER_VERSION) # define COMPILER_VERSION COMPILER_STRINGIFY(_MSC_VER) # endif # if defined(COMPILER_VERSION) && !defined(NO_COMPILER_VERSION) # define COMPILER_NAME "msc-" COMPILER_VERSION # else | > > > > > > > > > > > | 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 | # else # if defined(COMPILER_VERSION) && !defined(NO_COMPILER_VERSION) # define COMPILER_NAME "pellesc32-" COMPILER_VERSION # else # define COMPILER_NAME "pellesc32" # endif # endif # elif defined(__clang__) # if !defined(COMPILER_VERSION) # if defined(__clang_version__) # define COMPILER_VERSION __clang_version__ # endif # endif # if defined(COMPILER_VERSION) && !defined(NO_COMPILER_VERSION) # define COMPILER_NAME "clang-" COMPILER_VERSION # else # define COMPILER_NAME "clang" # endif # elif defined(_MSC_VER) # if !defined(COMPILER_VERSION) # define COMPILER_VERSION COMPILER_STRINGIFY(_MSC_VER) # endif # if defined(COMPILER_VERSION) && !defined(NO_COMPILER_VERSION) # define COMPILER_NAME "msc-" COMPILER_VERSION # else |
︙ | ︙ | |||
135 136 137 138 139 140 141 | # endif # endif # if defined(COMPILER_VERSION) && !defined(NO_COMPILER_VERSION) # define COMPILER_NAME "mingw32-" COMPILER_VERSION # else # define COMPILER_NAME "mingw32" # endif | < < > > | 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 | # endif # endif # if defined(COMPILER_VERSION) && !defined(NO_COMPILER_VERSION) # define COMPILER_NAME "mingw32-" COMPILER_VERSION # else # define COMPILER_NAME "mingw32" # endif # elif defined(__GNUC__) # if !defined(COMPILER_VERSION) # if defined(__VERSION__) # define COMPILER_VERSION __VERSION__ # endif # endif # if defined(COMPILER_VERSION) && !defined(NO_COMPILER_VERSION) # define COMPILER_NAME "gcc-" COMPILER_VERSION # else # define COMPILER_NAME "gcc" # endif # elif defined(_WIN32) # define COMPILER_NAME "win32" # else # define COMPILER_NAME "unknown" # endif #endif #if !defined(_RC_COMPILE_) && !defined(SQLITE_AMALGAMATION) |
︙ | ︙ | |||
214 215 216 217 218 219 220 221 | */ #if defined(__GNUC__) || defined(__clang__) # define NORETURN __attribute__((__noreturn__)) #else # define NORETURN #endif #endif /* _RC_COMPILE_ */ | > > > > > | 225 226 227 228 229 230 231 232 233 234 235 236 237 | */ #if defined(__GNUC__) || defined(__clang__) # define NORETURN __attribute__((__noreturn__)) #else # define NORETURN #endif /* ** Number of elements in an array */ #define count(X) (sizeof(X)/sizeof(X[0])) #endif /* _RC_COMPILE_ */ |
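The count() macro added above is the usual element-count idiom; a minimal usage sketch (the array is illustrative):

    static const char *azNames[] = { "alpha", "beta", "gamma" };

    /* count(azNames) expands to sizeof(azNames)/sizeof(azNames[0]) == 3,
    ** so the loop visits every element even if more initializers are
    ** added later. */
    int i;
    for(i=0; i<(int)count(azNames); i++){
      /* use azNames[i] */
    }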
Changes to src/configure.c.
︙ | ︙ | |||
125 126 127 128 129 130 131 132 133 134 135 136 137 138 | { "keep-glob", CONFIGSET_PROJ }, { "crlf-glob", CONFIGSET_PROJ }, { "crnl-glob", CONFIGSET_PROJ }, { "encoding-glob", CONFIGSET_PROJ }, { "empty-dirs", CONFIGSET_PROJ }, { "allow-symlinks", CONFIGSET_PROJ }, { "dotfiles", CONFIGSET_PROJ }, #ifdef FOSSIL_ENABLE_LEGACY_MV_RM { "mv-rm-files", CONFIGSET_PROJ }, #endif { "ticket-table", CONFIGSET_TKT }, { "ticket-common", CONFIGSET_TKT }, | > > | 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 | { "keep-glob", CONFIGSET_PROJ }, { "crlf-glob", CONFIGSET_PROJ }, { "crnl-glob", CONFIGSET_PROJ }, { "encoding-glob", CONFIGSET_PROJ }, { "empty-dirs", CONFIGSET_PROJ }, { "allow-symlinks", CONFIGSET_PROJ }, { "dotfiles", CONFIGSET_PROJ }, { "parent-project-code", CONFIGSET_PROJ }, { "parent-project-name", CONFIGSET_PROJ }, #ifdef FOSSIL_ENABLE_LEGACY_MV_RM { "mv-rm-files", CONFIGSET_PROJ }, #endif { "ticket-table", CONFIGSET_TKT }, { "ticket-common", CONFIGSET_TKT }, |
︙ | ︙ |
Changes to src/content.c.
︙ | ︙ | |||
450 451 452 453 454 455 456 457 458 459 460 461 462 463 | /* ** Turn dephantomization processing on or off. */ void content_enable_dephantomize(int onoff){ ignoreDephantomizations = !onoff; } /* ** Write content into the database. Return the record ID. If the ** content is already in the database, just return the record ID. ** ** If srcId is specified, then pBlob is delta content from ** the srcId record. srcId might be a phantom. | > > > > > > > > > > > > > > > > > > > > | 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 | /* ** Turn dephantomization processing on or off. */ void content_enable_dephantomize(int onoff){ ignoreDephantomizations = !onoff; } /* ** Make sure the g.rcvid global variable has been initialized. ** ** If the g.zIpAddr variable has not been set when this routine is ** called, use zSrc as the source of content for the rcvfrom ** table entry. */ void content_rcvid_init(const char *zSrc){ if( g.rcvid==0 ){ user_select(); if( g.zIpAddr ) zSrc = g.zIpAddr; db_multi_exec( "INSERT INTO rcvfrom(uid, mtime, nonce, ipaddr)" "VALUES(%d, julianday('now'), %Q, %Q)", g.userUid, g.zNonce, zSrc ); g.rcvid = db_last_insert_rowid(); } } /* ** Write content into the database. Return the record ID. If the ** content is already in the database, just return the record ID. ** ** If srcId is specified, then pBlob is delta content from ** the srcId record. srcId might be a phantom. |
︙ | ︙ | |||
528 529 530 531 532 533 534 | }else{ rid = 0; /* No entry with the same UUID currently exists */ markAsUnclustered = 1; } db_finalize(&s1); /* Construct a received-from ID if we do not already have one */ | | < < < < < < < | 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 | }else{ rid = 0; /* No entry with the same UUID currently exists */ markAsUnclustered = 1; } db_finalize(&s1); /* Construct a received-from ID if we do not already have one */ content_rcvid_init(0); if( nBlob ){ cmpr = pBlob[0]; }else{ blob_compress(pBlob, &cmpr); } if( rid>0 ){ |
︙ | ︙ | |||
839 840 841 842 843 844 845 | if( strncmp(z, "-----BEGIN PGP SIGNED MESSAGE-----", 34)==0 ) return 1; if( z[0]<'A' || z[0]>'Z' || z[1]!=' ' || z[0]=='I' ) return 0; if( z[n-1]!='\n' ) return 0; return 1; } /* | | | 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 | if( strncmp(z, "-----BEGIN PGP SIGNED MESSAGE-----", 34)==0 ) return 1; if( z[0]<'A' || z[0]>'Z' || z[1]!=' ' || z[0]=='I' ) return 0; if( z[n-1]!='\n' ) return 0; return 1; } /* ** COMMAND: test-integrity ** ** Verify that all content can be extracted from the BLOB table correctly. ** If the BLOB table is correct, then the repository can always be ** successfully reconstructed using "fossil rebuild". ** ** Options: ** |
︙ | ︙ | |||
1161 1162 1163 1164 1165 1166 1167 | blob_reset(&x); if( c!='y' && c!='Y' ) return; db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); db_begin_transaction(); db_prepare(&q, "SELECT rid FROM delta WHERE srcid=:rid"); for(i=2; i<g.argc; i++){ int rid = atoi(g.argv[i]); | | | 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 | blob_reset(&x); if( c!='y' && c!='Y' ) return; db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); db_begin_transaction(); db_prepare(&q, "SELECT rid FROM delta WHERE srcid=:rid"); for(i=2; i<g.argc; i++){ int rid = atoi(g.argv[i]); fossil_print("Erasing artifact %d (%s)\n", rid, db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid)); db_bind_int(&q, ":rid", rid); while( db_step(&q)==SQLITE_ROW ){ content_undelta(db_column_int(&q,0)); } db_reset(&q); db_multi_exec("DELETE FROM blob WHERE rid=%d", rid); db_multi_exec("DELETE FROM delta WHERE rid=%d", rid); } db_finalize(&q); db_end_transaction(0); } |
Changes to src/cson_amalgamation.c.
︙ | ︙ | |||
17 18 19 20 21 22 23 | # ifdef _MSC_VER # ifdef JSON_PARSER_DLL_EXPORTS # define JSON_PARSER_DLL_API __declspec(dllexport) # else # define JSON_PARSER_DLL_API __declspec(dllimport) # endif # else | | | | | | | | | | | | | | 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 | # ifdef _MSC_VER # ifdef JSON_PARSER_DLL_EXPORTS # define JSON_PARSER_DLL_API __declspec(dllexport) # else # define JSON_PARSER_DLL_API __declspec(dllimport) # endif # else # define JSON_PARSER_DLL_API # endif #else # define JSON_PARSER_DLL_API #endif /* Determine the integer type use to parse non-floating point numbers */ #ifdef _WIN32 typedef __int64 JSON_int_t; #define JSON_PARSER_INTEGER_SSCANF_TOKEN "%I64d" #define JSON_PARSER_INTEGER_SPRINTF_TOKEN "%I64d" #elif (__STDC_VERSION__ >= 199901L) || (HAVE_LONG_LONG == 1) typedef long long JSON_int_t; #define JSON_PARSER_INTEGER_SSCANF_TOKEN "%lld" #define JSON_PARSER_INTEGER_SPRINTF_TOKEN "%lld" #else typedef long JSON_int_t; #define JSON_PARSER_INTEGER_SSCANF_TOKEN "%ld" #define JSON_PARSER_INTEGER_SPRINTF_TOKEN "%ld" #endif #ifdef __cplusplus extern "C" { #endif typedef enum { JSON_E_NONE = 0, JSON_E_INVALID_CHAR, JSON_E_INVALID_KEYWORD, JSON_E_INVALID_ESCAPE_SEQUENCE, JSON_E_INVALID_UNICODE_SEQUENCE, JSON_E_INVALID_NUMBER, JSON_E_NESTING_DEPTH_REACHED, JSON_E_UNBALANCED_COLLECTION, JSON_E_EXPECTED_KEY, JSON_E_EXPECTED_COLON, JSON_E_OUT_OF_MEMORY } JSON_error; typedef enum { JSON_T_NONE = 0, JSON_T_ARRAY_BEGIN, JSON_T_ARRAY_END, JSON_T_OBJECT_BEGIN, JSON_T_OBJECT_END, JSON_T_INTEGER, JSON_T_FLOAT, JSON_T_NULL, JSON_T_TRUE, JSON_T_FALSE, JSON_T_STRING, JSON_T_KEY, JSON_T_MAX } JSON_type; typedef struct JSON_value_struct { union { JSON_int_t integer_value; double float_value; struct { const char* value; size_t length; } str; } vu; } JSON_value; typedef struct JSON_parser_struct* JSON_parser; /*! \brief JSON parser callback \param ctx The pointer passed to new_JSON_parser. \param type An element of JSON_type but not JSON_T_NONE. \param value A representation of the parsed value. This parameter is NULL for JSON_T_ARRAY_BEGIN, JSON_T_ARRAY_END, JSON_T_OBJECT_BEGIN, JSON_T_OBJECT_END, JSON_T_NULL, JSON_T_TRUE, and JSON_T_FALSE. String values are always returned as zero-terminated C strings. \return Non-zero if parsing should continue, else zero. */ typedef int (*JSON_parser_callback)(void* ctx, int type, const JSON_value* value); /** A typedef for allocator functions semantically compatible with malloc(). */ typedef void* (*JSON_malloc_t)(size_t n); /** A typedef for deallocator functions semantically compatible with free(). */ typedef void (*JSON_free_t)(void* mem); /*! \brief The structure used to configure a JSON parser object */ typedef struct { /** Pointer to a callback, called when the parser has something to tell the user. This parameter may be NULL. In this case the input is merely checked for validity. */ JSON_parser_callback callback; |
︙ | ︙ | |||
174 175 176 177 178 179 180 | - no comments - Uses realloc() for memory de/allocation. \param config. Used to configure the parser. */ JSON_PARSER_DLL_API void init_JSON_config(JSON_config * config); | | | | | | | | 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 | - no comments - Uses realloc() for memory de/allocation. \param config. Used to configure the parser. */ JSON_PARSER_DLL_API void init_JSON_config(JSON_config * config); /*! \brief Create a JSON parser object \param config. Used to configure the parser. Set to NULL to use the default configuration. See init_JSON_config. Its contents are copied by this function, so it need not outlive the returned object. \return The parser object, which is owned by the caller and must eventually be freed by calling delete_JSON_parser(). */ JSON_PARSER_DLL_API JSON_parser new_JSON_parser(JSON_config const* config); /*! \brief Destroy a previously created JSON parser object. */ JSON_PARSER_DLL_API void delete_JSON_parser(JSON_parser jc); /*! \brief Parse a character. \return Non-zero, if all characters passed to this function are part of are valid JSON. */ JSON_PARSER_DLL_API int JSON_parser_char(JSON_parser jc, int next_char); /*! \brief Finalize parsing. Call this method once after all input characters have been consumed. \return Non-zero, if all parsed characters are valid JSON, zero otherwise. */ JSON_PARSER_DLL_API int JSON_parser_done(JSON_parser jc); /*! \brief Determine if a given string is valid JSON white space \return Non-zero if the string is valid, zero otherwise. */ JSON_PARSER_DLL_API int JSON_parser_is_legal_white_space_string(const char* s); /*! \brief Gets the last error that occurred during the use of JSON_parser. \return A value from the JSON_error enum. */ JSON_PARSER_DLL_API int JSON_parser_get_last_error(JSON_parser jc); /*! \brief Re-sets the parser to prepare it for another parse run. \return True (non-zero) on success, 0 on error (e.g. !jc). */ JSON_PARSER_DLL_API int JSON_parser_reset(JSON_parser jc); #ifdef __cplusplus } #endif #endif /* JSON_PARSER_H */ /* end file parser/JSON_parser.h */ /* begin file parser/JSON_parser.c */ /* Copyright (c) 2007-2013 Jean Gressmann (jean@0x42.de) |
︙ | ︙ | |||
1429 1430 1431 1432 1433 1434 1435 | #endif #if defined(__cplusplus) extern "C" { #endif | | | 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 | #endif #if defined(__cplusplus) extern "C" { #endif /** This type holds the "vtbl" for type-specific operations when working with cson_value objects. All cson_values of a given logical type share a pointer to a single library-internal instance of this class. */ |
︙ | ︙ | |||
1581 1582 1583 1584 1585 1586 1587 | Assumes V is a (cson_value*) ans V->value is a (T*). Returns V->value cast to a (T*). */ #define CSON_CAST(T,V) ((T*)((V)->value)) /** Assumes V is a pointer to memory which is allocated as part of a cson_value instance (the bytes immediately after that part). | | | 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 | Assumes V is a (cson_value*) ans V->value is a (T*). Returns V->value cast to a (T*). */ #define CSON_CAST(T,V) ((T*)((V)->value)) /** Assumes V is a pointer to memory which is allocated as part of a cson_value instance (the bytes immediately after that part). Returns a pointer a cson_value by subtracting sizeof(cson_value) from that address and casting it to a (cson_value*) */ #define CSON_VCAST(V) ((cson_value *)(((unsigned char *)(V))-sizeof(cson_value))) /** CSON_INT(V) assumes that V is a (cson_value*) of type CSON_TYPE_INTEGER. This macro returns a (cson_int_t*) representing |
︙ | ︙ | |||
1605 1606 1607 1608 1609 1610 1611 | #define CSON_DBL(V) CSON_CAST(cson_double_t,(V)) #define CSON_STR(V) CSON_CAST(cson_string,(V)) #define CSON_OBJ(V) CSON_CAST(cson_object,(V)) #define CSON_ARRAY(V) CSON_CAST(cson_array,(V)) /** Holds special shared "constant" (though they are non-const) | | | | | | 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 | #define CSON_DBL(V) CSON_CAST(cson_double_t,(V)) #define CSON_STR(V) CSON_CAST(cson_string,(V)) #define CSON_OBJ(V) CSON_CAST(cson_object,(V)) #define CSON_ARRAY(V) CSON_CAST(cson_array,(V)) /** Holds special shared "constant" (though they are non-const) values. */ static struct CSON_EMPTY_HOLDER_ { char trueValue; cson_string stringValue; } CSON_EMPTY_HOLDER = { 1/*trueValue*/, cson_string_empty_m }; /** Indexes into the CSON_SPECIAL_VALUES array. If this enum changes in any way, makes damned sure that CSON_SPECIAL_VALUES is updated to match!!! */ enum CSON_INTERNAL_VALUES { CSON_VAL_UNDEF = 0, CSON_VAL_NULL = 1, CSON_VAL_TRUE = 2, CSON_VAL_FALSE = 3, CSON_VAL_INT_0 = 4, CSON_VAL_DBL_0 = 5, CSON_VAL_STR_EMPTY = 6, CSON_INTERNAL_VALUES_LENGTH }; /** Some "special" shared cson_value instances. These values MUST be initialized in the order specified by the CSON_INTERNAL_VALUES enum. Note that they are not const because they are used as shared-allocation objects in non-const contexts. However, the public API provides no way to modifying them, and clients who modify values directly are subject to The Wrath of Undefined Behaviour. */ static cson_value CSON_SPECIAL_VALUES[] = { |
︙ | ︙ | |||
1663 1664 1665 1666 1667 1668 1669 | }; /** Returns non-0 (true) if m is one of our special "built-in" values, e.g. from CSON_SPECIAL_VALUES and some "empty" values. | | | 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 | }; /** Returns non-0 (true) if m is one of our special "built-in" values, e.g. from CSON_SPECIAL_VALUES and some "empty" values. If this returns true, m MUST NOT be free()d! */ static char cson_value_is_builtin( void const * m ) { if((m >= (void const *)&CSON_EMPTY_HOLDER) && ( m < (void const *)(&CSON_EMPTY_HOLDER+1))) return 1; |
︙ | ︙ | |||
2181 2182 2183 2184 2185 2186 2187 | int (*visitor)(cson_kvp * obj, void * visitorState ), void * visitorState ); static int cson_value_list_visit( cson_value_list * self, int (*visitor)(cson_value * obj, void * visitorState ), void * visitorState ); #endif #endif | | | 2181 2182 2183 2184 2185 2186 2187 2188 2189 2190 2191 2192 2193 2194 2195 | int (*visitor)(cson_kvp * obj, void * visitorState ), void * visitorState ); static int cson_value_list_visit( cson_value_list * self, int (*visitor)(cson_value * obj, void * visitorState ), void * visitorState ); #endif #endif #if 0 # define LIST_T cson_value_list # define VALUE_T cson_value * # define VALUE_T_IS_PTR 1 # define LIST_T cson_kvp_list # define VALUE_T cson_kvp * # define VALUE_T_IS_PTR 1 |
︙ | ︙ | |||
2360 2361 2362 2363 2364 2365 2366 | cson_value * cson_value_new_object() { return cson_value_object_alloc(); } cson_object * cson_new_object() { | | | 2360 2361 2362 2363 2364 2365 2366 2367 2368 2369 2370 2371 2372 2373 2374 | cson_value * cson_value_new_object() { return cson_value_object_alloc(); } cson_object * cson_new_object() { return cson_value_get_object( cson_value_new_object() ); } cson_value * cson_value_new_array() { return cson_value_array_alloc(); } |
︙ | ︙ | |||
2606 2607 2608 2609 2610 2611 2612 | if( ! val || !val->api ) return cson_rc.ArgError; else { cson_int_t i = 0; int rc = 0; switch(val->api->typeID) { | | | 2606 2607 2608 2609 2610 2611 2612 2613 2614 2615 2616 2617 2618 2619 2620 | if( ! val || !val->api ) return cson_rc.ArgError; else { cson_int_t i = 0; int rc = 0; switch(val->api->typeID) { case CSON_TYPE_UNDEF: case CSON_TYPE_NULL: i = 0; break; case CSON_TYPE_BOOL: { char b = 0; cson_value_fetch_bool( val, &b ); i = b; |
︙ | ︙ | |||
2659 2660 2661 2662 2663 2664 2665 | if( ! val || !val->api ) return cson_rc.ArgError; else { cson_double_t d = 0.0; int rc = 0; switch(val->api->typeID) { | | | 2659 2660 2661 2662 2663 2664 2665 2666 2667 2668 2669 2670 2671 2672 2673 | if( ! val || !val->api ) return cson_rc.ArgError; else { cson_double_t d = 0.0; int rc = 0; switch(val->api->typeID) { case CSON_TYPE_UNDEF: case CSON_TYPE_NULL: d = 0; break; case CSON_TYPE_BOOL: { char b = 0; cson_value_fetch_bool( val, &b ); d = b ? 1.0 : 0.0; |
︙ | ︙ | |||
2789 2790 2791 2792 2793 2794 2795 | } #if 0 /** Removes and returns the last value from the given array, shrinking its size by 1. Returns NULL if ar is NULL, ar->list.count is 0, or the element at that index is NULL. | | | 2789 2790 2791 2792 2793 2794 2795 2796 2797 2798 2799 2800 2801 2802 2803 | } #if 0 /** Removes and returns the last value from the given array, shrinking its size by 1. Returns NULL if ar is NULL, ar->list.count is 0, or the element at that index is NULL. If removeRef is true then cson_value_free() is called to remove ar's reference count for the value. In that case NULL is returned, even if the object still has live references. If removeRef is false then the caller takes over ownership of that reference count point. If removeRef is false then the caller takes over ownership |
︙ | ︙ | |||
2856 2857 2858 2859 2860 2861 2862 | { cson_value * c = cson_value_new(CSON_TYPE_INTEGER,0); #if !defined(NDEBUG) && CSON_VOID_PTR_IS_BIG assert( sizeof(cson_int_t) <= sizeof(void *) ); #endif if( c ) { | | | | 2856 2857 2858 2859 2860 2861 2862 2863 2864 2865 2866 2867 2868 2869 2870 2871 2872 2873 2874 2875 2876 2877 2878 2879 2880 2881 2882 2883 2884 2885 2886 2887 2888 2889 | { cson_value * c = cson_value_new(CSON_TYPE_INTEGER,0); #if !defined(NDEBUG) && CSON_VOID_PTR_IS_BIG assert( sizeof(cson_int_t) <= sizeof(void *) ); #endif if( c ) { memcpy(CSON_INT(c), &v, sizeof(v)); } return c; } } cson_value * cson_new_double( cson_double_t v ) { return cson_value_new_double(v); } cson_value * cson_value_new_double( cson_double_t v ) { if( 0.0 == v ) return &CSON_SPECIAL_VALUES[CSON_VAL_DBL_0]; else { cson_value * c = cson_value_new(CSON_TYPE_DOUBLE,0); if( c ) { memcpy(CSON_DBL(c), &v, sizeof(v)); } return c; } } cson_string * cson_new_string(char const * str, unsigned int len) { |
︙ | ︙ | |||
3066 3067 3068 3069 3070 3071 3072 | if( obj->kvp.count ) { qsort( obj->kvp.list, obj->kvp.count, sizeof(cson_kvp*), cson_kvp_cmp ); } } | | | 3066 3067 3068 3069 3070 3071 3072 3073 3074 3075 3076 3077 3078 3079 3080 | if( obj->kvp.count ) { qsort( obj->kvp.list, obj->kvp.count, sizeof(cson_kvp*), cson_kvp_cmp ); } } #endif int cson_object_unset( cson_object * obj, char const * key ) { if( ! obj || !key || !*key ) return cson_rc.ArgError; else { unsigned int ndx = 0; |
︙ | ︙ | |||
3236 3237 3238 3239 3240 3241 3242 | If p->node is-a Object then value is inserted into the object using p->key. In any other case cson_rc.InternalError is returned. Returns cson_rc.AllocError if an allocation fails. Returns 0 on success. On error, parsing must be ceased immediately. | | | 3236 3237 3238 3239 3240 3241 3242 3243 3244 3245 3246 3247 3248 3249 3250 | If p->node is-a Object then value is inserted into the object using p->key. In any other case cson_rc.InternalError is returned. Returns cson_rc.AllocError if an allocation fails. Returns 0 on success. On error, parsing must be ceased immediately. Ownership of val is ALWAYS TRANSFERED to this function. If this function fails, val will be cleaned up and destroyed. (This simplifies error handling in the core parser.) */ static int cson_parser_set_key( cson_parser * p, cson_value * val ) { assert( p && val ); |
︙ | ︙ | |||
3483 3484 3485 3486 3487 3488 3489 | break; } ++p->totalKeyCount; break; } case JSON_T_STRING: { cson_value * v = cson_value_new_string( value->vu.str.value, value->vu.str.length ); | | | 3483 3484 3485 3486 3487 3488 3489 3490 3491 3492 3493 3494 3495 3496 3497 | break; } ++p->totalKeyCount; break; } case JSON_T_STRING: { cson_value * v = cson_value_new_string( value->vu.str.value, value->vu.str.length ); rc = ( NULL == v ) ? cson_rc.AllocError : cson_parser_push_value( p, v ); break; } default: assert(0); rc = cson_rc.InternalError; |
︙ | ︙ | |||
3530 3531 3532 3533 3534 3535 3536 | Cleans up all contents of p but does not free p. To properly take over ownership of the parser's root node on a successful parse: - Copy p->root's pointer and set p->root to NULL. - Eventually free up p->root with cson_value_free(). | | | 3530 3531 3532 3533 3534 3535 3536 3537 3538 3539 3540 3541 3542 3543 3544 | Cleans up all contents of p but does not free p. To properly take over ownership of the parser's root node on a successful parse: - Copy p->root's pointer and set p->root to NULL. - Eventually free up p->root with cson_value_free(). If you do not set p->root to NULL, p->root will be freed along with any other items inserted into it (or under it) during the parsing process. */ static int cson_parser_clean( cson_parser * p ) { if( ! p ) return cson_rc.ArgError; |
︙ | ︙ | |||
3569 3570 3571 3572 3573 3574 3575 | unsigned char ch[2] = {0,0}; cson_parse_opt const opt = opt_ ? *opt_ : cson_parse_opt_empty; int rc = 0; unsigned int len = 1; cson_parse_info info = info_ ? *info_ : cson_parse_info_empty; cson_parser p = cson_parser_empty; if( ! tgt || ! src ) return cson_rc.ArgError; | | | 3569 3570 3571 3572 3573 3574 3575 3576 3577 3578 3579 3580 3581 3582 3583 | unsigned char ch[2] = {0,0}; cson_parse_opt const opt = opt_ ? *opt_ : cson_parse_opt_empty; int rc = 0; unsigned int len = 1; cson_parse_info info = info_ ? *info_ : cson_parse_info_empty; cson_parser p = cson_parser_empty; if( ! tgt || ! src ) return cson_rc.ArgError; { JSON_config jopt = {0}; init_JSON_config( &jopt ); jopt.allow_comments = opt.allowComments; jopt.depth = opt.maxDepth; jopt.callback_ctx = &p; jopt.handle_floats_manually = 0; |
︙ | ︙ | |||
4638 4639 4640 4641 4642 4643 4644 | #else rc = cson_value_clone(v); #endif #undef TRY_SHARING cson_value_add_reference(rc); return rc; } | | | 4638 4639 4640 4641 4642 4643 4644 4645 4646 4647 4648 4649 4650 4651 4652 | #else rc = cson_value_clone(v); #endif #undef TRY_SHARING cson_value_add_reference(rc); return rc; } static cson_value * cson_value_clone_array( cson_value const * orig ) { unsigned int i = 0; cson_array const * asrc = cson_value_get_array( orig ); unsigned int alen = cson_array_length_get( asrc ); cson_value * destV = NULL; cson_array * destA = NULL; |
︙ | ︙ | |||
4678 4679 4680 4681 4682 4683 4684 | return NULL; } cson_value_free(cl)/*remove our artificial reference */; } } return destV; } | | | 4678 4679 4680 4681 4682 4683 4684 4685 4686 4687 4688 4689 4690 4691 4692 | return NULL; } cson_value_free(cl)/*remove our artificial reference */; } } return destV; } static cson_value * cson_value_clone_object( cson_value const * orig ) { cson_object const * src = cson_value_get_object( orig ); cson_value * destV = NULL; cson_object * dest = NULL; cson_kvp const * kvp = NULL; cson_object_iterator iter = cson_object_iterator_empty; |
︙ | ︙ | |||
4832 4833 4834 4835 4836 4837 4838 | v = cson_strdup( "null", 4 ); break; } case CSON_TYPE_STRING: { cson_string const * jstr = cson_value_get_string(orig); unsigned const int slen = cson_string_length_bytes( jstr ); assert( NULL != jstr ); | | | 4832 4833 4834 4835 4836 4837 4838 4839 4840 4841 4842 4843 4844 4845 4846 | v = cson_strdup( "null", 4 ); break; } case CSON_TYPE_STRING: { cson_string const * jstr = cson_value_get_string(orig); unsigned const int slen = cson_string_length_bytes( jstr ); assert( NULL != jstr ); v = cson_strdup( cson_string_cstr( jstr ), slen ); break; } case CSON_TYPE_INTEGER: { char buf[BufSize] = {0}; if( 0 < sprintf( v, "%"CSON_INT_T_PFMT, cson_value_get_integer(orig)) ) { v = cson_strdup( buf, strlen(buf) ); |
︙ | ︙ | |||
4885 4886 4887 4888 4889 4890 4891 | v = cson_strdup( "null", 4 ); break; } case CSON_TYPE_STRING: { cson_string const * jstr = cson_value_get_string(orig); unsigned const int slen = cson_string_length_bytes( jstr ); assert( NULL != jstr ); | | | 4885 4886 4887 4888 4889 4890 4891 4892 4893 4894 4895 4896 4897 4898 4899 | v = cson_strdup( "null", 4 ); break; } case CSON_TYPE_STRING: { cson_string const * jstr = cson_value_get_string(orig); unsigned const int slen = cson_string_length_bytes( jstr ); assert( NULL != jstr ); v = cson_strdup( cson_string_cstr( jstr ), slen ); break; } case CSON_TYPE_INTEGER: { char buf[BufSize] = {0}; if( 0 < sprintf( v, "%"CSON_INT_T_PFMT, cson_value_get_integer(orig)) ) { v = cson_strdup( buf, strlen(buf) ); |
︙ | ︙ | |||
5349 5350 5351 5352 5353 5354 5355 | char const * colName = NULL; int i = 0; int rc = 0; int colCount = 0; assert(st); colCount = sqlite3_column_count(st); if( colCount <= 0 ) return NULL; | | | 5349 5350 5351 5352 5353 5354 5355 5356 5357 5358 5359 5360 5361 5362 5363 | char const * colName = NULL; int i = 0; int rc = 0; int colCount = 0; assert(st); colCount = sqlite3_column_count(st); if( colCount <= 0 ) return NULL; aryV = cson_value_new_array(); if( ! aryV ) return NULL; ary = cson_value_get_array(aryV); assert(ary); for( i = 0; (0 ==rc) && (i < colCount); ++i ) { colName = sqlite3_column_name( st, i ); |
︙ | ︙ | |||
5489 5490 5491 5492 5493 5494 5495 | error: cson_value_free(aryV); aryV = NULL; end: return aryV; } | | | 5489 5490 5491 5492 5493 5494 5495 5496 5497 5498 5499 5500 5501 5502 5503 | error: cson_value_free(aryV); aryV = NULL; end: return aryV; } /** Internal impl of cson_sqlite3_stmt_to_json() when the 'fat' parameter is non-0. */ static int cson_sqlite3_stmt_to_json_fat( sqlite3_stmt * st, cson_value ** tgt ) { #define RETURN(RC) { if(rootV) cson_value_free(rootV); return RC; } |
︙ | ︙ | |||
5634 5635 5636 5637 5638 5639 5640 | { sqlite3_stmt * st = NULL; int rc = sqlite3_prepare_v2( db, sql, -1, &st, NULL ); if( 0 != rc ) return cson_rc.IOError /* FIXME: Better error code? */; rc = cson_sqlite3_stmt_to_json( st, tgt, fat ); sqlite3_finalize( st ); return rc; | | | 5634 5635 5636 5637 5638 5639 5640 5641 5642 5643 5644 5645 5646 5647 5648 | { sqlite3_stmt * st = NULL; int rc = sqlite3_prepare_v2( db, sql, -1, &st, NULL ); if( 0 != rc ) return cson_rc.IOError /* FIXME: Better error code? */; rc = cson_sqlite3_stmt_to_json( st, tgt, fat ); sqlite3_finalize( st ); return rc; } } int cson_sqlite3_bind_value( sqlite3_stmt * st, int ndx, cson_value const * v ) { int rc = 0; char convertErr = 0; if(!st) return cson_rc.ArgError; |
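The hunk above shows cson_sqlite3_sql_to_json() preparing the statement itself and then delegating to cson_sqlite3_stmt_to_json(). A minimal sketch of calling the statement-level API directly, with an illustrative query and variable names (0 is cson_rc.OK):

    sqlite3_stmt *pStmt = 0;
    cson_value *pRoot = 0;
    if( sqlite3_prepare_v2(db, "SELECT rid, uuid FROM blob", -1, &pStmt, 0)==SQLITE_OK ){
      if( cson_sqlite3_stmt_to_json(pStmt, &pRoot, 1 /* "fat" layout */)==0 ){
        /* ... use pRoot, then drop the reference ... */
        cson_value_free(pRoot);
      }
      sqlite3_finalize(pStmt);
    }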
︙ | ︙ |
Changes to src/db.c.
︙ | ︙ | |||
25 26 27 28 29 30 31 | ** (2) The "repository" database ** ** (3) A local checkout database named "_FOSSIL_" or ".fslckout" ** and located at the root of the local copy of the source tree. ** */ #include "config.h" | | > > > > | 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 | ** (2) The "repository" database ** ** (3) A local checkout database named "_FOSSIL_" or ".fslckout" ** and located at the root of the local copy of the source tree. ** */ #include "config.h" #if defined(_WIN32) # if USE_SEE # include <windows.h> # endif #else # include <pwd.h> #endif #include <sqlite3.h> #include <sys/types.h> #include <sys/stat.h> #include <unistd.h> #include <time.h> |
︙ | ︙ | |||
868 869 870 871 872 873 874 875 876 | db_now_function, 0, 0); sqlite3_create_function(db, "toLocal", 0, SQLITE_UTF8, 0, db_tolocal_function, 0, 0); sqlite3_create_function(db, "fromLocal", 0, SQLITE_UTF8, 0, db_fromlocal_function, 0, 0); } /* ** If the database file zDbFile has a name that suggests that it is | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | > | < | | | | | 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 | db_now_function, 0, 0); sqlite3_create_function(db, "toLocal", 0, SQLITE_UTF8, 0, db_tolocal_function, 0, 0); sqlite3_create_function(db, "fromLocal", 0, SQLITE_UTF8, 0, db_fromlocal_function, 0, 0); } #if USE_SEE /* ** This is a pointer to the saved database encryption key string. */ static char *zSavedKey = 0; /* ** This is the size of the saved database encryption key, in bytes. */ size_t savedKeySize = 0; /* ** This function returns the saved database encryption key -OR- zero if ** no database encryption key is saved. */ char *db_get_saved_encryption_key(){ return zSavedKey; } /* ** This function returns the size of the saved database encryption key ** -OR- zero if no database encryption key is saved. */ size_t db_get_saved_encryption_key_size(){ return savedKeySize; } /* ** This function arranges for the database encryption key to be securely ** saved in non-pagable memory (on platforms where this is possible). */ static void db_save_encryption_key( Blob *pKey ){ void *p = NULL; size_t n = 0; size_t pageSize = 0; size_t blobSize = 0; blobSize = blob_size(pKey); if( blobSize==0 ) return; fossil_get_page_size(&pageSize); assert( pageSize>0 ); if( blobSize>pageSize ){ fossil_fatal("key blob too large: %u versus %u", blobSize, pageSize); } p = fossil_secure_alloc_page(&n); assert( p!=NULL ); assert( n==pageSize ); assert( n>=blobSize ); memcpy(p, blob_str(pKey), blobSize); zSavedKey = p; savedKeySize = n; } /* ** This function arranges for the saved database encryption key to be ** securely zeroed, unlocked (if necessary), and freed. */ void db_unsave_encryption_key(){ fossil_secure_free_page(zSavedKey, savedKeySize); zSavedKey = NULL; savedKeySize = 0; } /* ** This function sets the saved database encryption key to the specified ** string value, allocating or freeing the underlying memory if needed. 
*/ void db_set_saved_encryption_key( Blob *pKey ){ if( zSavedKey!=NULL ){ size_t blobSize = blob_size(pKey); if( blobSize==0 ){ db_unsave_encryption_key(); }else{ if( blobSize>savedKeySize ){ fossil_fatal("key blob too large: %u versus %u", blobSize, savedKeySize); } fossil_secure_zero(zSavedKey, savedKeySize); memcpy(zSavedKey, blob_str(pKey), blobSize); } }else{ db_save_encryption_key(pKey); } } #if defined(_WIN32) /* ** This function sets the saved database encryption key to one that gets ** read from the specified Fossil parent process. This is only necessary ** (or functional) on Windows. */ void db_read_saved_encryption_key_from_process( DWORD processId, /* Identifier for Fossil parent process. */ LPVOID pAddress, /* Pointer to saved key buffer in the parent process. */ SIZE_T nSize /* Size of saved key buffer in the parent process. */ ){ void *p = NULL; size_t n = 0; size_t pageSize = 0; HANDLE hProcess = NULL; fossil_get_page_size(&pageSize); assert( pageSize>0 ); if( nSize>pageSize ){ fossil_fatal("key too large: %u versus %u", nSize, pageSize); } p = fossil_secure_alloc_page(&n); assert( p!=NULL ); assert( n==pageSize ); assert( n>=nSize ); hProcess = OpenProcess(PROCESS_VM_READ, FALSE, processId); if( hProcess!=NULL ){ SIZE_T nRead = 0; if( ReadProcessMemory(hProcess, pAddress, p, nSize, &nRead) ){ CloseHandle(hProcess); if( nRead==nSize ){ db_unsave_encryption_key(); zSavedKey = p; savedKeySize = n; }else{ fossil_fatal("bad size read, %u out of %u bytes at %p from pid %lu", nRead, nSize, pAddress, processId); } }else{ CloseHandle(hProcess); fossil_fatal("failed read, %u bytes at %p from pid %lu: %lu", nSize, pAddress, processId, GetLastError()); } }else{ fossil_fatal("failed to open pid %lu: %lu", processId, GetLastError()); } } #endif /* defined(_WIN32) */ #endif /* USE_SEE */ /* ** If the database file zDbFile has a name that suggests that it is ** encrypted, then prompt for the database encryption key and return it ** in the blob *pKey. Or, if the encryption key has previously been ** requested, just return a copy of the previous result. The blob in ** *pKey must be initialized. */ static void db_maybe_obtain_encryption_key( const char *zDbFile, /* Name of the database file */ Blob *pKey /* Put the encryption key here */ ){ #if USE_SEE if( sqlite3_strglob("*.efossil", zDbFile)==0 ){ char *zKey = db_get_saved_encryption_key(); if( zKey ){ blob_set(pKey, zKey); }else{ char *zPrompt = mprintf("\rencryption key for '%s': ", zDbFile); prompt_for_password(zPrompt, pKey, 0); fossil_free(zPrompt); db_set_saved_encryption_key(pKey); } } #endif } /* |
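The USE_SEE helpers above keep the passphrase in a single locked, non-pageable memory page and hand it back through db_get_saved_encryption_key(). A rough sketch of the intended call pattern, assuming the passphrase has been collected into a Blob (the prompt text and ordering are illustrative):

    #if USE_SEE
    Blob key;
    blob_init(&key, 0, 0);
    prompt_for_password("encryption key: ", &key, 0);
    db_set_saved_encryption_key(&key);   /* copied into the locked page */
    blob_reset(&key);                    /* local copy no longer needed */
    /* ... later the saved key feeds PRAGMA key / ATTACH ... KEY ... */
    if( db_get_saved_encryption_key()!=0 ){ /* key material is available */ }
    db_unsave_encryption_key();          /* zero, unlock, and free at exit */
    #endif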
︙ | ︙ | |||
913 914 915 916 917 918 919 | zDbName, &db, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, g.zVfsName ); if( rc!=SQLITE_OK ){ db_err("[%s]: %s", zDbName, sqlite3_errmsg(db)); } | > | > | > > | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | < < < < < | < < < | > | < < | < > | | 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 | zDbName, &db, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, g.zVfsName ); if( rc!=SQLITE_OK ){ db_err("[%s]: %s", zDbName, sqlite3_errmsg(db)); } blob_init(&key, 0, 0); db_maybe_obtain_encryption_key(zDbName, &key); if( blob_size(&key)>0 ){ char *zCmd = sqlite3_mprintf("PRAGMA key(%Q)", blob_str(&key)); sqlite3_exec(db, zCmd, 0, 0, 0); fossil_secure_zero(zCmd, strlen(zCmd)); sqlite3_free(zCmd); } blob_reset(&key); sqlite3_busy_timeout(db, 5000); sqlite3_wal_autocheckpoint(db, 1); /* Set to checkpoint frequently */ sqlite3_create_function(db, "user", 0, SQLITE_UTF8, 0, db_sql_user, 0, 0); sqlite3_create_function(db, "cgi", 1, SQLITE_UTF8, 0, db_sql_cgi, 0, 0); sqlite3_create_function(db, "cgi", 2, SQLITE_UTF8, 0, db_sql_cgi, 0, 0); sqlite3_create_function(db, "print", -1, SQLITE_UTF8, 0,db_sql_print,0,0); sqlite3_create_function( db, "is_selected", 1, SQLITE_UTF8, 0, file_is_selected,0,0 ); sqlite3_create_function( db, "if_selected", 3, SQLITE_UTF8, 0, file_is_selected,0,0 ); if( g.fSqlTrace ) sqlite3_trace_v2(db, SQLITE_TRACE_STMT, db_sql_trace, 0); db_add_aux_functions(db); re_add_sql_func(db); /* The REGEXP operator */ foci_register(db); /* The "files_of_checkin" virtual table */ sqlite3_exec(db, "PRAGMA foreign_keys=OFF;", 0, 0, 0); return db; } /* ** Detaches the zLabel database. */ void db_detach(const char *zLabel){ db_multi_exec("DETACH DATABASE %Q", zLabel); } /* ** zDbName is the name of a database file. Attach zDbName using ** the name zLabel. */ void db_attach(const char *zDbName, const char *zLabel){ char *zCmd; Blob key; blob_init(&key, 0, 0); db_maybe_obtain_encryption_key(zDbName, &key); zCmd = sqlite3_mprintf("ATTACH DATABASE %Q AS %Q KEY %Q", zDbName, zLabel, blob_str(&key)); db_multi_exec(zCmd /*works-like:""*/); fossil_secure_zero(zCmd, strlen(zCmd)); sqlite3_free(zCmd); blob_reset(&key); } /* ** Change the schema name of the "main" database to zLabel. ** zLabel must be a static string that is unchanged for the life of ** the database connection. ** ** After calling this routine, db_database_slot(zLabel) should ** return 0. */ void db_set_main_schemaname(sqlite3 *db, const char *zLabel){ if( sqlite3_db_config(db, SQLITE_DBCONFIG_MAINDBNAME, zLabel) ){ fossil_fatal("Fossil requires a version of SQLite that supports the " "SQLITE_DBCONFIG_MAINDBNAME interface."); } } /* ** Return the slot number for database zLabel. The first database ** opened is slot 0. The "temp" database is slot 1. 
Attached databases ** are slots 2 and higher. ** ** Return -1 if zLabel does not match any open database. */ int db_database_slot(const char *zLabel){ int iSlot = -1; Stmt q; if( g.db==0 ) return iSlot; db_prepare(&q, "PRAGMA database_list"); while( db_step(&q)==SQLITE_ROW ){ if( fossil_strcmp(db_column_text(&q,1),zLabel)==0 ){ iSlot = db_column_int(&q, 0); break; } } db_finalize(&q); return iSlot; } /* ** zDbName is the name of a database file. If no other database ** file is open, then open this one. If another database file is ** already open, then attach zDbName using the name zLabel. */ void db_open_or_attach(const char *zDbName, const char *zLabel){ if( !g.db ){ g.db = db_open(zDbName); db_set_main_schemaname(g.db, zLabel); }else{ db_attach(zDbName, zLabel); } } /* ** Close the per-user database file in ~/.fossil */ void db_close_config(){ int iSlot = db_database_slot("configdb"); if( iSlot>0 ){ db_detach("configdb"); g.zConfigDbName = 0; }else if( g.dbConfig ){ sqlite3_wal_checkpoint(g.dbConfig, 0); sqlite3_close(g.dbConfig); g.dbConfig = 0; g.zConfigDbName = 0; }else if( g.db && 0==iSlot ){ sqlite3_wal_checkpoint(g.db, 0); sqlite3_close(g.db); g.db = 0; g.zConfigDbName = 0; } } /* ** Open the user database in "~/.fossil". Create the database anew if ** it does not already exist. ** ** If the useAttach flag is 0 (the usual case) then the user database is ** opened on a separate database connection g.dbConfig. This prevents ** the ~/.fossil database from becoming locked on long check-in or sync ** operations which hold an exclusive transaction. In a few cases, though, ** it is convenient for the ~/.fossil to be attached to the main database ** connection so that we can join between the various databases. In that ** case, invoke this routine with useAttach as 1. */ int db_open_config(int useAttach, int isOptional){ char *zDbName; char *zHome; if( g.zConfigDbName ){ int alreadyAttached = db_database_slot("configdb")>0; if( useAttach==alreadyAttached ) return 1; /* Already open. */ db_close_config(); } zHome = fossil_getenv("FOSSIL_HOME"); #if defined(_WIN32) || defined(__CYGWIN__) if( zHome==0 ){ zHome = fossil_getenv("LOCALAPPDATA"); if( zHome==0 ){ |
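With db_set_main_schemaname() the first database opened takes the given label as its schema name (slot 0), and every later db_open_or_attach() call becomes an ATTACH under its own label; db_database_slot() reports which case applied. A hedged sketch, with placeholder paths and an illustrative open order:

    db_open_or_attach("repo.fossil", "repository");  /* first open: becomes slot 0  */
    db_open_or_attach(".fslckout", "localdb");       /* connection exists: ATTACHed */
    if( db_database_slot("localdb")>0 ){
      /* localdb shares the repository connection and can be joined against */
    }
    db_detach("localdb");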
︙ | ︙ | |||
1075 1076 1077 1078 1079 1080 1081 | db_init_database(zDbName, zConfigSchema, (char*)0); } if( file_access(zDbName, W_OK) ){ if( isOptional ) return 0; fossil_fatal("configuration file %s must be writeable", zDbName); } if( useAttach ){ | | < < | | < | | < | | | | | | | 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 | db_init_database(zDbName, zConfigSchema, (char*)0); } if( file_access(zDbName, W_OK) ){ if( isOptional ) return 0; fossil_fatal("configuration file %s must be writeable", zDbName); } if( useAttach ){ db_open_or_attach(zDbName, "configdb"); g.dbConfig = 0; }else{ g.dbConfig = db_open(zDbName); db_set_main_schemaname(g.dbConfig, "configdb"); } g.zConfigDbName = zDbName; return 1; } /* ** Return TRUE if zTable exists. */ int db_table_exists( const char *zDb, /* One of: NULL, "configdb", "localdb", "repository" */ const char *zTable /* Name of table */ ){ return sqlite3_table_column_metadata(g.db, zDb, zTable, 0, 0, 0, 0, 0, 0)==SQLITE_OK; } /* ** Return TRUE if zTable exists and contains column zColumn. ** Return FALSE if zTable does not exist or if zTable exists ** but lacks zColumn. */ int db_table_has_column( const char *zDb, /* One of: NULL, "config", "localdb", "repository" */ const char *zTable, /* Name of table */ const char *zColumn /* Name of column in table */ ){ return sqlite3_table_column_metadata(g.db, zDb, zTable, zColumn, 0, 0, 0, 0, 0)==SQLITE_OK; } /* ** Returns TRUE if zTable exists in the local database but lacks column ** zColumn */ static int db_local_table_exists_but_lacks_column( const char *zTable, const char *zColumn ){ return db_table_exists("localdb", zTable) && !db_table_has_column("localdb", zTable, zColumn); } /* ** If zDbName is a valid local database file, open it and return ** true. If it is not a valid local database file, return 0. */ static int isValidLocalDb(const char *zDbName){ i64 lsize; char *zVFileDef; if( file_access(zDbName, F_OK) ) return 0; lsize = file_size(zDbName); if( lsize%1024!=0 || lsize<4096 ) return 0; db_open_or_attach(zDbName, "localdb"); zVFileDef = db_text(0, "SELECT sql FROM localdb.sqlite_master" " WHERE name=='vfile'"); if( zVFileDef==0 ) return 0; /* If the "isexe" column is missing from the vfile table, then ** add it now. This code added on 2010-03-06. After all users have ** upgraded, this code can be safely deleted. */ if( sqlite3_strglob("* isexe *", zVFileDef)!=0 ){ |
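db_table_exists() and db_table_has_column() are thin wrappers around sqlite3_table_column_metadata(), which is what makes the "add the column only if it is missing" schema upgrades elsewhere in this file cheap. The idiom, sketched with the plink.baseid column that appears further down:

    if( db_table_exists("repository", "plink")
     && !db_table_has_column("repository", "plink", "baseid") ){
      db_multi_exec("ALTER TABLE repository.plink ADD COLUMN baseid;");
    }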
︙ | ︙ | |||
1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 | if( db_local_table_exists_but_lacks_column("undo", "isLink") ){ db_multi_exec("ALTER TABLE undo ADD COLUMN isLink BOOLEAN DEFAULT 0"); } if( db_local_table_exists_but_lacks_column("undo_vfile", "islink") ){ db_multi_exec("ALTER TABLE undo_vfile ADD COLUMN islink BOOL DEFAULT 0"); } } return 1; } /* ** Locate the root directory of the local repository tree. The root ** directory is found by searching for a file named "_FOSSIL_" or ".fslckout" ** that contains a valid repository database. | > | 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 | if( db_local_table_exists_but_lacks_column("undo", "isLink") ){ db_multi_exec("ALTER TABLE undo ADD COLUMN isLink BOOLEAN DEFAULT 0"); } if( db_local_table_exists_but_lacks_column("undo_vfile", "islink") ){ db_multi_exec("ALTER TABLE undo_vfile ADD COLUMN islink BOOL DEFAULT 0"); } } fossil_free(zVFileDef); return 1; } /* ** Locate the root directory of the local repository tree. The root ** directory is found by searching for a file named "_FOSSIL_" or ".fslckout" ** that contains a valid repository database. |
︙ | ︙ | |||
1288 1289 1290 1291 1292 1293 1294 | #ifdef FOSSIL_ENABLE_JSON g.json.resultCode = FSL_JSON_E_DB_NOT_VALID; #endif fossil_panic("not a valid repository: %s", zDbName); } } g.zRepositoryName = mprintf("%s", zDbName); | | | | | < | 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 1477 1478 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493 1494 1495 1496 1497 1498 1499 1500 1501 1502 1503 1504 | #ifdef FOSSIL_ENABLE_JSON g.json.resultCode = FSL_JSON_E_DB_NOT_VALID; #endif fossil_panic("not a valid repository: %s", zDbName); } } g.zRepositoryName = mprintf("%s", zDbName); db_open_or_attach(g.zRepositoryName, "repository"); g.repositoryOpen = 1; /* Cache "allow-symlinks" option, because we'll need it on every stat call */ g.allowSymlinks = db_get_boolean("allow-symlinks", db_allow_symlinks_by_default()); g.zAuxSchema = db_get("aux-schema",""); /* Verify that the PLINK table has a new column added by the ** 2014-11-28 schema change. Create it if necessary. This code ** can be removed in the future, once all users have upgraded to the ** 2014-11-28 or later schema. */ if( !db_table_has_column("repository","plink","baseid") ){ db_multi_exec( "ALTER TABLE repository.plink ADD COLUMN baseid;" ); } /* Verify that the MLINK table has the newer columns added by the ** 2015-01-24 schema change. Create them if necessary. This code ** can be removed in the future, once all users have upgraded to the ** 2015-01-24 or later schema. */ if( !db_table_has_column("repository","mlink","isaux") ){ db_begin_transaction(); db_multi_exec( "ALTER TABLE repository.mlink ADD COLUMN pmid INTEGER DEFAULT 0;" "ALTER TABLE repository.mlink ADD COLUMN isaux BOOLEAN DEFAULT 0;" ); db_end_transaction(0); } } /* ** Flags for the db_find_and_open_repository() function. |
︙ | ︙ | |||
1372 1373 1374 1375 1376 1377 1378 | fossil_fatal("use --repository or -R to specify the repository database"); }else{ fossil_fatal("specify the repository name as a command-line argument"); } } } | < < < < < < < < < < < < | | 1546 1547 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 | fossil_fatal("use --repository or -R to specify the repository database"); }else{ fossil_fatal("specify the repository name as a command-line argument"); } } } /* ** Return TRUE if the schema is out-of-date */ int db_schema_is_outofdate(void){ return strcmp(g.zAuxSchema,AUX_SCHEMA_MIN)<0 || strcmp(g.zAuxSchema,AUX_SCHEMA_MAX)>0; } /* ** Return true if the database is writeable */ int db_is_writeable(const char *zName){ return g.db!=0 && !sqlite3_db_readonly(g.db, zName); } /* ** Verify that the repository schema is correct. If it is not correct, ** issue a fatal error and die. */ void db_verify_schema(void){ |
︙ | ︙ | |||
1441 1442 1443 1444 1445 1446 1447 | if( file_access(zRepo, F_OK) ){ fossil_fatal("no such file: %s", zRepo); } if( db_open_local(zRepo)==0 ){ fossil_fatal("not in a local checkout"); return; } | | | 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 | if( file_access(zRepo, F_OK) ){ fossil_fatal("no such file: %s", zRepo); } if( db_open_local(zRepo)==0 ){ fossil_fatal("not in a local checkout"); return; } db_open_or_attach(zRepo, "test_repo"); db_lset("repository", blob_str(&repo)); db_record_repository_filename(blob_str(&repo)); db_close(1); } /* |
︙ | ︙ | |||
1501 1502 1503 1504 1505 1506 1507 | while( db.pAllStmt ){ db_finalize(db.pAllStmt); } db_end_transaction(1); pStmt = 0; db_close_config(); | | | | | | | < < < | 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 1696 1697 1698 1699 1700 1701 1702 | while( db.pAllStmt ){ db_finalize(db.pAllStmt); } db_end_transaction(1); pStmt = 0; db_close_config(); /* If the localdb has a lot of unused free space, ** then VACUUM it as we shut down. */ if( db_database_slot("localdb")>=0 ){ int nFree = db_int(0, "PRAGMA localdb.freelist_count"); int nTotal = db_int(0, "PRAGMA localdb.page_count"); if( nFree>nTotal/4 ){ db_multi_exec("VACUUM localdb;"); } } if( g.db ){ int rc; sqlite3_wal_checkpoint(g.db, 0); rc = sqlite3_close(g.db); if( rc==SQLITE_BUSY && reportErrors ){ while( (pStmt = sqlite3_next_stmt(g.db, pStmt))!=0 ){ fossil_warning("unfinalized SQL statement: [%s]", sqlite3_sql(pStmt)); } } g.db = 0; } g.repositoryOpen = 0; g.localOpen = 0; assert( g.dbConfig==0 ); assert( g.zConfigDbName==0 ); } /* ** Create a new empty repository database with the given name. ** ** Only the schema is initialized. The required VAR tables entries |
︙ | ︙ | |||
1748 1749 1750 1751 1752 1753 1754 1755 1756 1757 1758 1759 1760 1761 | ** default users "anonymous", "nobody", "reader", "developer", and their ** associated permissions will be copied. ** ** Options: ** --template FILE copy settings from repository file ** --admin-user|-A USERNAME select given USERNAME as admin user ** --date-override DATETIME use DATETIME as time of the initial check-in ** ** See also: clone */ void create_repository_cmd(void){ char *zPassword; const char *zTemplate; /* Repository from which to copy settings */ const char *zDate; /* Date of the initial check-in */ | > > > > > > | 1907 1908 1909 1910 1911 1912 1913 1914 1915 1916 1917 1918 1919 1920 1921 1922 1923 1924 1925 1926 | ** default users "anonymous", "nobody", "reader", "developer", and their ** associated permissions will be copied. ** ** Options: ** --template FILE copy settings from repository file ** --admin-user|-A USERNAME select given USERNAME as admin user ** --date-override DATETIME use DATETIME as time of the initial check-in ** ** DATETIME may be "now" or "YYYY-MM-DDTHH:MM:SS.SSS". If in ** year-month-day form, it may be truncated, the "T" may be replaced by ** a space, and it may also name a timezone offset from UTC as "-HH:MM" ** (westward) or "+HH:MM" (eastward). Either no timezone suffix or "Z" ** means UTC. ** ** See also: clone */ void create_repository_cmd(void){ char *zPassword; const char *zTemplate; /* Repository from which to copy settings */ const char *zDate; /* Date of the initial check-in */ |
︙ | ︙ | |||
1806 1807 1808 1809 1810 1811 1812 | if( g.fSqlPrint ){ for(i=0; i<argc; i++){ char c = i==argc-1 ? '\n' : ' '; fossil_print("%s%c", sqlite3_value_text(argv[i]), c); } } } | > > | > > > > | > > | 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 | if( g.fSqlPrint ){ for(i=0; i<argc; i++){ char c = i==argc-1 ? '\n' : ' '; fossil_print("%s%c", sqlite3_value_text(argv[i]), c); } } } LOCAL int db_sql_trace(unsigned m, void *notUsed, void *pP, void *pX){ sqlite3_stmt *pStmt = (sqlite3_stmt*)pP; char *zSql; int n; const char *zArg = (const char*)pX; if( zArg[0]=='-' ) return 0; zSql = sqlite3_expanded_sql(pStmt); n = (int)strlen(zSql); fossil_trace("%s%s\n", zSql, (n>0 && zSql[n-1]==';') ? "" : ";"); sqlite3_free(zSql); return 0; } /* ** Implement the user() SQL function. user() takes no arguments and ** returns the user ID of the current user. */ LOCAL void db_sql_user( |
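db_sql_trace() is rewritten here for the sqlite3_trace_v2() interface: with SQLITE_TRACE_STMT the pX argument carries the unexpanded SQL (or a "--" comment for trigger steps, hence the leading-dash check), and sqlite3_expanded_sql() produces the text with bound values filled in. A standalone sketch of the same callback shape outside Fossil (names are illustrative):

    #include <stdio.h>
    #include <sqlite3.h>

    static int trace_cb(unsigned mask, void *pCtx, void *pStmt, void *pX){
      const char *zRaw = (const char*)pX;     /* unexpanded SQL or "--..." trigger note */
      char *zSql;
      if( zRaw && zRaw[0]=='-' ) return 0;    /* skip trigger comments, as above */
      zSql = sqlite3_expanded_sql((sqlite3_stmt*)pStmt);
      if( zSql ){
        fprintf(stderr, "SQL: %s\n", zSql);
        sqlite3_free(zSql);
      }
      return 0;
    }
    /* registered once per connection:
    **   sqlite3_trace_v2(db, SQLITE_TRACE_STMT, trace_cb, 0); */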
︙ | ︙ | |||
1992 1993 1994 1995 1996 1997 1998 | ** same constraint also holds true when restoring the previously swapped ** database connection; otherwise, it means that no swap was performed ** because the main database connection was already pointing to the config ** database. */ if( g.dbConfig ){ sqlite3 *dbTemp = g.db; | < < < | 2165 2166 2167 2168 2169 2170 2171 2172 2173 2174 2175 2176 2177 2178 2179 2180 | ** same constraint also holds true when restoring the previously swapped ** database connection; otherwise, it means that no swap was performed ** because the main database connection was already pointing to the config ** database. */ if( g.dbConfig ){ sqlite3 *dbTemp = g.db; g.db = g.dbConfig; g.dbConfig = dbTemp; } } /* ** Try to read a versioned setting string from .fossil-settings/<name>. ** ** Return the text of the string if it is found. Return NULL if not |
︙ | ︙ | |||
2249 2250 2251 2252 2253 2254 2255 2256 2257 2258 2259 2260 2261 2262 | } int db_lget_int(const char *zName, int dflt){ return db_int(dflt, "SELECT value FROM vvar WHERE name=%Q", zName); } void db_lset_int(const char *zName, int value){ db_multi_exec("REPLACE INTO vvar(name,value) VALUES(%Q,%d)", zName, value); } /* ** Record the name of a local repository in the global_config() database. ** The repository filename %s is recorded as an entry with a "name" field ** of the following form: ** ** repo:%s | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 2419 2420 2421 2422 2423 2424 2425 2426 2427 2428 2429 2430 2431 2432 2433 2434 2435 2436 2437 2438 2439 2440 2441 2442 2443 2444 2445 2446 2447 2448 2449 2450 2451 2452 2453 2454 2455 2456 2457 2458 2459 2460 2461 2462 2463 2464 2465 2466 2467 | } int db_lget_int(const char *zName, int dflt){ return db_int(dflt, "SELECT value FROM vvar WHERE name=%Q", zName); } void db_lset_int(const char *zName, int value){ db_multi_exec("REPLACE INTO vvar(name,value) VALUES(%Q,%d)", zName, value); } #if INTERFACE /* Manifest generation flags */ #define MFESTFLG_RAW 0x01 #define MFESTFLG_UUID 0x02 #define MFESTFLG_TAGS 0x04 #endif /* INTERFACE */ /* ** Get the manifest setting. For backwards compatibility first check if the ** value is a boolean. If it's not a boolean, treat each character as a flag ** to enable a manifest type. This system puts certain boundary conditions on ** which letters can be used to represent flags (any permutation of flags must ** not be able to fully form one of the boolean values). */ int db_get_manifest_setting(void){ int flg; char *zVal = db_get("manifest", 0); if( zVal==0 || is_false(zVal) ){ return 0; }else if( is_truth(zVal) ){ return MFESTFLG_RAW|MFESTFLG_UUID; } flg = 0; while( *zVal ){ switch( *zVal ){ case 'r': flg |= MFESTFLG_RAW; break; case 'u': flg |= MFESTFLG_UUID; break; case 't': flg |= MFESTFLG_TAGS; break; } zVal++; } return flg; } /* ** Record the name of a local repository in the global_config() database. ** The repository filename %s is recorded as an entry with a "name" field ** of the following form: ** ** repo:%s |
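db_get_manifest_setting() turns the (no longer strictly boolean) "manifest" setting into a bitmask: a plain true still means the classic "manifest" plus "manifest.uuid" pair, while a string such as "rut" selects the files individually. Callers are expected to test the flags roughly like this (the surrounding actions are only sketched):

    int flg = db_get_manifest_setting();
    if( flg & MFESTFLG_RAW )  { /* write the "manifest" file      */ }
    if( flg & MFESTFLG_UUID ) { /* write the "manifest.uuid" file */ }
    if( flg & MFESTFLG_TAGS ) { /* write the "manifest.tags" file */ }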
︙ | ︙ | |||
2550 2551 2552 2553 2554 2555 2556 | { "hash-digits", 0, 5, 0, 0, "10" }, { "http-port", 0, 16, 0, 0, "8080" }, { "https-login", 0, 0, 0, 0, "off" }, { "ignore-glob", 0, 40, 1, 0, "" }, { "keep-glob", 0, 40, 1, 0, "" }, { "localauth", 0, 0, 0, 0, "off" }, { "main-branch", 0, 40, 0, 0, "trunk" }, | | | 2755 2756 2757 2758 2759 2760 2761 2762 2763 2764 2765 2766 2767 2768 2769 | { "hash-digits", 0, 5, 0, 0, "10" }, { "http-port", 0, 16, 0, 0, "8080" }, { "https-login", 0, 0, 0, 0, "off" }, { "ignore-glob", 0, 40, 1, 0, "" }, { "keep-glob", 0, 40, 1, 0, "" }, { "localauth", 0, 0, 0, 0, "off" }, { "main-branch", 0, 40, 0, 0, "trunk" }, { "manifest", 0, 5, 1, 0, "off" }, { "max-loadavg", 0, 25, 0, 0, "0.0" }, { "max-upload", 0, 25, 0, 0, "250000" }, { "mtime-changes", 0, 0, 0, 0, "on" }, #if FOSSIL_ENABLE_LEGACY_MV_RM { "mv-rm-files", 0, 0, 0, 0, "off" }, #endif { "pgp-command", 0, 40, 0, 0, "gpg --clearsign -o " }, |
︙ | ︙ | |||
2577 2578 2579 2580 2581 2582 2583 2584 2585 2586 2587 2588 2589 2590 2591 2592 2593 2594 2595 2596 2597 2598 | { "th1-docs", 0, 0, 0, 0, "off" }, #endif #ifdef FOSSIL_ENABLE_TH1_HOOKS { "th1-hooks", 0, 0, 0, 0, "off" }, #endif { "th1-setup", 0, 40, 1, 1, "" }, { "th1-uri-regexp", 0, 40, 1, 0, "" }, { "web-browser", 0, 32, 0, 0, "" }, { 0,0,0,0,0,0 } }; /* ** Look up a control setting by its name. Return a pointer to the Setting ** object, or NULL if there is no such setting. ** ** If allowPrefix is true, then the Setting returned is the first one for ** which zName is a prefix of the Setting name. */ const Setting *db_find_setting(const char *zName, int allowPrefix){ int lwr, mid, upr, c; int n = (int)strlen(zName) + !allowPrefix; lwr = 0; | > | | 2782 2783 2784 2785 2786 2787 2788 2789 2790 2791 2792 2793 2794 2795 2796 2797 2798 2799 2800 2801 2802 2803 2804 2805 2806 2807 2808 2809 2810 2811 2812 | { "th1-docs", 0, 0, 0, 0, "off" }, #endif #ifdef FOSSIL_ENABLE_TH1_HOOKS { "th1-hooks", 0, 0, 0, 0, "off" }, #endif { "th1-setup", 0, 40, 1, 1, "" }, { "th1-uri-regexp", 0, 40, 1, 0, "" }, { "uv-sync", 0, 0, 0, 0, "off" }, { "web-browser", 0, 32, 0, 0, "" }, { 0,0,0,0,0,0 } }; /* ** Look up a control setting by its name. Return a pointer to the Setting ** object, or NULL if there is no such setting. ** ** If allowPrefix is true, then the Setting returned is the first one for ** which zName is a prefix of the Setting name. */ const Setting *db_find_setting(const char *zName, int allowPrefix){ int lwr, mid, upr, c; int n = (int)strlen(zName) + !allowPrefix; lwr = 0; upr = count(aSetting)-2; while( upr>=lwr ){ mid = (upr+lwr)/2; c = fossil_strncmp(zName, aSetting[mid].name, n); if( c<0 ){ upr = mid - 1; }else if( c>0 ){ lwr = mid + 1; |
︙ | ︙ | |||
2755 2756 2757 2758 2759 2760 2761 | ** localauth If enabled, require that HTTP connections from ** 127.0.0.1 be authenticated by password. If ** false, all HTTP requests from localhost have ** unrestricted access to the repository. ** ** main-branch The primary branch for the project. Default: trunk ** | | | > > > | | 2961 2962 2963 2964 2965 2966 2967 2968 2969 2970 2971 2972 2973 2974 2975 2976 2977 2978 2979 2980 | ** localauth If enabled, require that HTTP connections from ** 127.0.0.1 be authenticated by password. If ** false, all HTTP requests from localhost have ** unrestricted access to the repository. ** ** main-branch The primary branch for the project. Default: trunk ** ** manifest If set to a true boolean value, automatically create ** (versionable) files "manifest" and "manifest.uuid" in every checkout. ** Optionally use combinations of characters 'r' ** for "manifest", 'u' for "manifest.uuid" and 't' for ** "manifest.tags". The SQLite and Fossil repositories ** both require manifests. Default: off. ** ** max-loadavg Some CPU-intensive web pages (ex: /zip, /tarball, /blame) ** are disallowed if the system load average goes above this ** value. "0.0" means no limit. This only works on unix. ** Only local settings of this value make a difference since ** when running as a web-server, Fossil does not open the ** global configuration database. |
︙ | ︙ | |||
2852 2853 2854 2855 2856 2857 2858 2859 2860 2861 2862 2863 2864 2865 2866 2867 2868 2869 2870 2871 2872 2873 2874 2875 2876 2877 2878 2879 2880 2881 | ** th1-setup This is the setup script to be evaluated after creating ** (versionable) and initializing the TH1 interpreter. By default, this ** is empty and no extra setup is performed. ** ** th1-uri-regexp Specify which URI's are allowed in HTTP requests from ** (versionable) TH1 scripts. If empty, no HTTP requests are allowed ** whatsoever. The default is an empty string. ** ** web-browser A shell command used to launch your preferred ** web browser when given a URL as an argument. ** Defaults to "start" on windows, "open" on Mac, ** and "firefox" on Unix. ** ** Options: ** --global set or unset the given property globally instead of ** setting or unsetting it for the open repository only. ** ** See also: configuration */ void setting_cmd(void){ int i; int globalFlag = find_option("global","g",0)!=0; int unsetFlag = g.argv[1][0]=='u'; db_open_config(1, 0); if( !globalFlag ){ db_find_and_open_repository(OPEN_ANY_SCHEMA | OPEN_OK_NOT_FOUND, 0); } if( !g.repositoryOpen ){ globalFlag = 1; } | > > > > > > > > > | 3061 3062 3063 3064 3065 3066 3067 3068 3069 3070 3071 3072 3073 3074 3075 3076 3077 3078 3079 3080 3081 3082 3083 3084 3085 3086 3087 3088 3089 3090 3091 3092 3093 3094 3095 3096 3097 3098 3099 | ** th1-setup This is the setup script to be evaluated after creating ** (versionable) and initializing the TH1 interpreter. By default, this ** is empty and no extra setup is performed. ** ** th1-uri-regexp Specify which URI's are allowed in HTTP requests from ** (versionable) TH1 scripts. If empty, no HTTP requests are allowed ** whatsoever. The default is an empty string. ** ** uv-sync If true, automatically send unversioned files as part ** of a "fossil clone" or "fossil sync" command. The ** default is false, in which case the -u option is ** needed to clone or sync unversioned files. ** ** web-browser A shell command used to launch your preferred ** web browser when given a URL as an argument. ** Defaults to "start" on windows, "open" on Mac, ** and "firefox" on Unix. ** ** Options: ** --global set or unset the given property globally instead of ** setting or unsetting it for the open repository only. ** ** --exact only consider exact name matches. ** ** See also: configuration */ void setting_cmd(void){ int i; int globalFlag = find_option("global","g",0)!=0; int exactFlag = find_option("exact",0,0)!=0; int unsetFlag = g.argv[1][0]=='u'; verify_all_options(); db_open_config(1, 0); if( !globalFlag ){ db_find_and_open_repository(OPEN_ANY_SCHEMA | OPEN_OK_NOT_FOUND, 0); } if( !g.repositoryOpen ){ globalFlag = 1; } |
︙ | ︙ | |||
2897 2898 2899 2900 2901 2902 2903 | if( g.argc==2 ){ for(i=0; aSetting[i].name; i++){ print_setting(&aSetting[i]); } }else if( g.argc==3 || g.argc==4 ){ const char *zName = g.argv[2]; int n = (int)strlen(zName); | | | 3115 3116 3117 3118 3119 3120 3121 3122 3123 3124 3125 3126 3127 3128 3129 | if( g.argc==2 ){ for(i=0; aSetting[i].name; i++){ print_setting(&aSetting[i]); } }else if( g.argc==3 || g.argc==4 ){ const char *zName = g.argv[2]; int n = (int)strlen(zName); const Setting *pSetting = db_find_setting(zName, !exactFlag); if( pSetting==0 ){ fossil_fatal("no such setting: %s", zName); } if( globalFlag && fossil_strcmp(pSetting->name, "manifest")==0 ){ fossil_fatal("cannot set 'manifest' globally"); } if( unsetFlag || g.argc==4 ){ |
︙ | ︙ | |||
2930 2931 2932 2933 2934 2935 2936 | }else{ db_set(pSetting->name, g.argv[3], globalFlag); } if( isManifest && g.localOpen ){ manifest_to_disk(db_lget_int("checkout", 0)); } }else{ | | > > > > > | 3148 3149 3150 3151 3152 3153 3154 3155 3156 3157 3158 3159 3160 3161 3162 3163 3164 3165 3166 3167 | }else{ db_set(pSetting->name, g.argv[3], globalFlag); } if( isManifest && g.localOpen ){ manifest_to_disk(db_lget_int("checkout", 0)); } }else{ while( pSetting->name ){ if( exactFlag ){ if( fossil_strcmp(pSetting->name,zName)!=0 ) break; }else{ if( fossil_strncmp(pSetting->name,zName,n)!=0 ) break; } print_setting(pSetting); pSetting++; } } }else{ usage("?PROPERTY? ?VALUE? ?-global?"); } |
︙ | ︙ | |||
2969 2970 2971 2972 2973 2974 2975 | } rSpan /= 356.24; /* Convert units to years */ return sqlite3_mprintf("%.1f years", rSpan); } /* ** COMMAND: test-timespan | | | | | 3192 3193 3194 3195 3196 3197 3198 3199 3200 3201 3202 3203 3204 3205 3206 3207 3208 3209 3210 3211 3212 3213 3214 3215 3216 3217 3218 3219 3220 3221 3222 3223 3224 3225 3226 3227 3228 3229 3230 3231 3232 3233 3234 3235 3236 3237 3238 3239 3240 3241 3242 | } rSpan /= 356.24; /* Convert units to years */ return sqlite3_mprintf("%.1f years", rSpan); } /* ** COMMAND: test-timespan ** ** Usage: %fossil test-timespan TIMESTAMP ** ** Print the approximate span of time from now to TIMESTAMP. */ void test_timespan_cmd(void){ double rDiff; if( g.argc!=3 ) usage("TIMESTAMP"); sqlite3_open(":memory:", &g.db); rDiff = db_double(0.0, "SELECT julianday('now') - julianday(%Q)", g.argv[2]); fossil_print("Time differences: %s\n", db_timespan_name(rDiff)); sqlite3_close(g.db); g.db = 0; } /* ** COMMAND: test-without-rowid ** ** Usage: %fossil test-without-rowid FILENAME... ** ** Change the Fossil repository FILENAME to make use of the WITHOUT ROWID ** optimization. FILENAME can also be the ~/.fossil file or a local ** .fslckout or _FOSSIL_ file. ** ** The purpose of this command is for testing the WITHOUT ROWID capabilities ** of SQLite. There is no big advantage to using WITHOUT ROWID in Fossil. ** ** Options: ** --dryrun | -n No changes. Just print what would happen. */ void test_without_rowid(void){ int i, j; Stmt q; Blob allSql; int dryRun = find_option("dry-run", "n", 0)!=0; for(i=2; i<g.argc; i++){ db_open_or_attach(g.argv[i], "main"); blob_init(&allSql, "BEGIN;\n", -1); db_prepare(&q, "SELECT name, sql FROM main.sqlite_master " " WHERE type='table' AND sql NOT LIKE '%%WITHOUT ROWID%%'" " AND name IN ('global_config','shun','concealed','config'," " 'plink','tagxref','backlink','vcache');" ); |
︙ | ︙ | |||
3060 3061 3062 3063 3064 3065 3066 | ** Make sure the adminlog table exists. Create it if it does not */ void create_admin_log_table(void){ static int once = 0; if( once ) return; once = 1; db_multi_exec( | | | | | 3283 3284 3285 3286 3287 3288 3289 3290 3291 3292 3293 3294 3295 3296 3297 3298 3299 3300 3301 3302 3303 | ** Make sure the adminlog table exists. Create it if it does not */ void create_admin_log_table(void){ static int once = 0; if( once ) return; once = 1; db_multi_exec( "CREATE TABLE IF NOT EXISTS repository.admin_log(\n" " id INTEGER PRIMARY KEY,\n" " time INTEGER, -- Seconds since 1970\n" " page TEXT, -- path of page\n" " who TEXT, -- User who made the change\n" " what TEXT -- What changed\n" ")" ); } /* ** Write a message into the admin_event table, if admin logging is ** enabled via the admin-log configuration option. */ |
︙ | ︙ |
Changes to src/delta.c.
︙ | ︙ | |||
243 244 245 246 247 248 249 | z += 4; } #elif defined(_MSC_VER) && _MSC_VER>=1300 while( z<zEnd ){ sum += _byteswap_ulong(*(unsigned*)z); z += 4; } | | | 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 | z += 4; } #elif defined(_MSC_VER) && _MSC_VER>=1300 while( z<zEnd ){ sum += _byteswap_ulong(*(unsigned*)z); z += 4; } #else unsigned sum0 = 0; unsigned sum1 = 0; unsigned sum2 = 0; while(N >= 16){ sum0 += ((unsigned)z[0] + z[4] + z[8] + z[12]); sum1 += ((unsigned)z[1] + z[5] + z[9] + z[13]); sum2 += ((unsigned)z[2] + z[6] + z[10]+ z[14]); |
︙ | ︙ | |||
375 376 377 378 379 380 381 382 | } /* Compute the hash table used to locate matching sections in the ** source file. */ nHash = lenSrc/NHASH; collide = fossil_malloc( nHash*2*sizeof(int) ); landmark = &collide[nHash]; | > < < | 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 | } /* Compute the hash table used to locate matching sections in the ** source file. */ nHash = lenSrc/NHASH; collide = fossil_malloc( nHash*2*sizeof(int) ); memset(collide, -1, nHash*2*sizeof(int)); landmark = &collide[nHash]; for(i=0; i<lenSrc-NHASH; i+=NHASH){ int hv = hash_once(&zSrc[i]) % nHash; collide[i/NHASH] = landmark[hv]; landmark[hv] = i/NHASH; } /* Begin scanning the target file and generating copy commands and |
︙ | ︙ |
Changes to src/deltacmd.c.
︙ | ︙ | |||
27 28 29 30 31 32 33 | */ int blob_delta_create(Blob *pOriginal, Blob *pTarget, Blob *pDelta){ const char *zOrig, *zTarg; int lenOrig, lenTarg; int len; char *zRes; blob_zero(pDelta); | | | | | | | | 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 | */ int blob_delta_create(Blob *pOriginal, Blob *pTarget, Blob *pDelta){ const char *zOrig, *zTarg; int lenOrig, lenTarg; int len; char *zRes; blob_zero(pDelta); zOrig = blob_materialize(pOriginal); lenOrig = blob_size(pOriginal); zTarg = blob_materialize(pTarget); lenTarg = blob_size(pTarget); blob_resize(pDelta, lenTarg+16); zRes = blob_materialize(pDelta); len = delta_create(zOrig, lenOrig, zTarg, lenTarg, zRes); blob_resize(pDelta, len); return 0; } /* ** COMMAND: test-delta-create ** ** Usage: %fossil test-delta-create FILE1 FILE2 DELTA ** ** Create and output a delta that carries FILE1 into FILE2. ** Store the result in DELTA. */ void delta_create_cmd(void){ Blob orig, target, delta; if( g.argc!=5 ){ usage("ORIGIN TARGET DELTA"); } if( blob_read_from_file(&orig, g.argv[2])<0 ){ fossil_fatal("cannot read %s", g.argv[2]); } if( blob_read_from_file(&target, g.argv[3])<0 ){ fossil_fatal("cannot read %s", g.argv[3]); } blob_delta_create(&orig, &target, &delta); if( blob_write_to_file(&delta, g.argv[4])<blob_size(&delta) ){ fossil_fatal("cannot write %s", g.argv[4]); } blob_reset(&orig); blob_reset(&target); blob_reset(&delta); } /* |
︙ | ︙ | |||
83 84 85 86 87 88 89 | int nCopy = 0; int nInsert = 0; int sz1, sz2, sz3; if( g.argc!=4 ){ usage("ORIGIN TARGET"); } if( blob_read_from_file(&orig, g.argv[2])<0 ){ | | | | 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 | int nCopy = 0; int nInsert = 0; int sz1, sz2, sz3; if( g.argc!=4 ){ usage("ORIGIN TARGET"); } if( blob_read_from_file(&orig, g.argv[2])<0 ){ fossil_fatal("cannot read %s", g.argv[2]); } if( blob_read_from_file(&target, g.argv[3])<0 ){ fossil_fatal("cannot read %s", g.argv[3]); } blob_delta_create(&orig, &target, &delta); delta_analyze(blob_buffer(&delta), blob_size(&delta), &nCopy, &nInsert); sz1 = blob_size(&orig); sz2 = blob_size(&target); sz3 = blob_size(&delta); blob_reset(&orig); |
︙ | ︙ | |||
151 152 153 154 155 156 157 | */ void delta_apply_cmd(void){ Blob orig, target, delta; if( g.argc!=5 ){ usage("ORIGIN DELTA TARGET"); } if( blob_read_from_file(&orig, g.argv[2])<0 ){ | | | | | 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 | */ void delta_apply_cmd(void){ Blob orig, target, delta; if( g.argc!=5 ){ usage("ORIGIN DELTA TARGET"); } if( blob_read_from_file(&orig, g.argv[2])<0 ){ fossil_fatal("cannot read %s", g.argv[2]); } if( blob_read_from_file(&delta, g.argv[3])<0 ){ fossil_fatal("cannot read %s", g.argv[3]); } blob_delta_apply(&orig, &delta, &target); if( blob_write_to_file(&target, g.argv[4])<blob_size(&target) ){ fossil_fatal("cannot write %s", g.argv[4]); } blob_reset(&orig); blob_reset(&target); blob_reset(&delta); } |
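The three test commands above drive the blob-level delta API end to end; the same round trip can be written directly against those helpers. A minimal sketch with placeholder file names and error checking omitted:

    Blob orig, target, delta, rebuilt;
    blob_read_from_file(&orig,   "old.txt");
    blob_read_from_file(&target, "new.txt");
    blob_delta_create(&orig, &target, &delta);   /* delta carries old.txt into new.txt */
    blob_delta_apply(&orig, &delta, &rebuilt);   /* rebuilt should match new.txt       */
    blob_reset(&orig);  blob_reset(&target);
    blob_reset(&delta); blob_reset(&rebuilt);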
︙ | ︙ |
Changes to src/descendants.c.
︙ | ︙ | |||
194 195 196 197 198 199 200 | " generation INTEGER PRIMARY KEY);" "DELETE FROM ancestor;" "WITH RECURSIVE g(x,i) AS (" " VALUES(%d,1)" " UNION ALL" " SELECT plink.pid, g.i+1 FROM plink, g" " WHERE plink.cid=g.x AND plink.isprim)" | | | | | 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 | " generation INTEGER PRIMARY KEY);" "DELETE FROM ancestor;" "WITH RECURSIVE g(x,i) AS (" " VALUES(%d,1)" " UNION ALL" " SELECT plink.pid, g.i+1 FROM plink, g" " WHERE plink.cid=g.x AND plink.isprim)" "INSERT INTO ancestor(rid,generation) SELECT x,i FROM g;", rid ); } /* ** Compute the "mtime" of the file given whose blob.rid is "fid" that ** is part of check-in "vid". The mtime will be the mtime on vid or ** some ancestor of vid where fid first appears. */ int mtime_of_manifest_file( int vid, /* The check-in that contains fid */ int fid, /* The id of the file whose check-in time is sought */ i64 *pMTime /* Write result here */ ){ static int prevVid = -1; static Stmt q; if( prevVid!=vid ){ prevVid = vid; db_multi_exec("CREATE TEMP TABLE IF NOT EXISTS ok(rid INTEGER PRIMARY KEY);" "DELETE FROM ok;"); compute_ancestors(vid, 100000000, 1); } db_static_prepare(&q, "SELECT (max(event.mtime)-2440587.5)*86400 FROM mlink, event" " WHERE mlink.mid=event.objid" " AND +mlink.mid IN ok" " AND mlink.fid=:fid"); |
︙ | ︙ | |||
453 454 455 456 457 458 459 | int showAll = P("all")!=0; int showClosed = P("closed")!=0; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } if( !showAll ){ | | | | | 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 | int showAll = P("all")!=0; int showClosed = P("closed")!=0; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } if( !showAll ){ style_submenu_element("All", "leaves?all"); } if( !showClosed ){ style_submenu_element("Closed", "leaves?closed"); } if( showClosed || showAll ){ style_submenu_element("Open", "leaves"); } style_header("Leaves"); login_anonymous_available(); #if 0 style_sidebox_begin("Nomenclature:", "33%"); @ <ol> @ <li> A <div class="sideboxDescribed">leaf</div> |
︙ | ︙ |
Changes to src/diff.c.
︙ | ︙ | |||
111 112 113 114 115 116 117 | int *aEdit; /* Array of copy/delete/insert triples */ int nEdit; /* Number of integers (3x num of triples) in aEdit[] */ int nEditAlloc; /* Space allocated for aEdit[] */ DLine *aFrom; /* File on left side of the diff */ int nFrom; /* Number of lines in aFrom[] */ DLine *aTo; /* File on right side of the diff */ int nTo; /* Number of lines in aTo[] */ | | > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > | > < < < < < | | | > > > | | < | | | < < > > > > | > | | < < < < < < < < < < | > | | > | > | < > > > | 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 | int *aEdit; /* Array of copy/delete/insert triples */ int nEdit; /* Number of integers (3x num of triples) in aEdit[] */ int nEditAlloc; /* Space allocated for aEdit[] */ DLine *aFrom; /* File on left side of the diff */ int nFrom; /* Number of lines in aFrom[] */ DLine *aTo; /* File on right side of the diff */ int nTo; /* Number of lines in aTo[] */ int (*same_fn)(const DLine*,const DLine*); /* comparison function */ }; /* ** Count the number of lines in the input string. Include the last line ** in the count even if it lacks the \n terminator. If an empty string ** is specified, the number of lines is zero. For the purposes of this ** function, a string is considered empty if it contains no characters ** -OR- it contains only NUL characters. */ static int count_lines( const char *z, int n, int *pnLine ){ int nLine; const char *zNL, *z2; for(nLine=0, z2=z; (zNL = strchr(z2,'\n'))!=0; z2=zNL+1, nLine++){} if( z2[0]!='\0' ){ nLine++; do{ z2++; }while( z2[0]!='\0' ); } if( n!=(int)(z2-z) ) return 0; if( pnLine ) *pnLine = nLine; return 1; } /* ** Return an array of DLine objects containing a pointer to the ** start of each line and a hash of that line. The lower ** bits of the hash store the length of each line. ** ** Trailing whitespace is removed from each line. 2010-08-20: Not any ** more. If trailing whitespace is ignored, the "patch" command gets ** confused by the diff output. Ticket [a9f7b23c2e376af5b0e5b] ** ** Return 0 if the file is binary or contains a line that is ** too long. ** ** Profiling show that in most cases this routine consumes the bulk of ** the CPU time on a diff. 
*/ static DLine *break_into_lines( const char *z, int n, int *pnLine, u64 diffFlags ){ int nLine, i, k, nn, s, x; unsigned int h, h2; DLine *a; const char *zNL; if( count_lines(z, n, &nLine)==0 ){ return 0; } assert( nLine>0 || z[0]=='\0' ); a = fossil_malloc( sizeof(a[0])*nLine ); memset(a, 0, sizeof(a[0])*nLine); if( nLine==0 ){ *pnLine = 0; return a; } i = 0; do{ zNL = strchr(z,'\n'); if( zNL==0 ) zNL = z+n; nn = (int)(zNL - z); if( nn>LENGTH_MASK ){ fossil_free(a); return 0; } a[i].z = z; k = nn; if( diffFlags & DIFF_STRIP_EOLCR ){ if( k>0 && z[k-1]=='\r' ){ k--; } } a[i].n = k; s = 0; if( diffFlags & DIFF_IGNORE_EOLWS ){ while( k>0 && fossil_isspace(z[k-1]) ){ k--; } } if( (diffFlags & DIFF_IGNORE_ALLWS)==DIFF_IGNORE_ALLWS ){ int numws = 0; while( s<k && fossil_isspace(z[s]) ){ s++; } for(h=0, x=s; x<k; x++){ char c = z[x]; if( fossil_isspace(c) ){ ++numws; }else{ h += c; h *= 0x9e3779b1; } } k -= numws; }else{ for(h=0, x=s; x<k; x++){ h += z[x]; h *= 0x9e3779b1; } } a[i].indent = s; a[i].h = h = (h<<LENGTH_MASK_SZ) | (k-s); h2 = h % nLine; a[i].iNext = a[h2].iHash; a[h2].iHash = i+1; z += nn+1; n -= nn+1; i++; }while( zNL[0]!='\0' && zNL[1]!='\0' ); assert( i==nLine ); /* Return results */ *pnLine = nLine; return a; } /* |
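count_lines() and break_into_lines() above accept a final line with no trailing newline and size the DLine array exactly. A caller drives them roughly the way the annotator further down does, with illustrative variable names:

    int nFrom = 0, nTo = 0;
    DLine *aFrom = break_into_lines(blob_str(&fileA), blob_size(&fileA), &nFrom, diffFlags);
    DLine *aTo   = break_into_lines(blob_str(&fileB), blob_size(&fileB), &nTo,   diffFlags);
    if( aFrom==0 || aTo==0 ){
      /* binary content or an over-long line: no text diff is attempted */
    }
    /* ... run the diff over aFrom/aTo ... */
    fossil_free(aFrom);
    fossil_free(aTo);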
︙ | ︙ | |||
956 957 958 959 960 961 962 | while( nB>0 && fossil_isspace(zB[0]) ){ nB--; zB++; } while( nB>0 && fossil_isspace(zB[nB-1]) ){ nB--; } if( nA>250 ) nA = 250; if( nB>250 ) nB = 250; avg = (nA+nB)/2; if( avg==0 ) return 0; if( nA==nB && memcmp(zA, zB, nA)==0 ) return 0; | | | | | 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 | while( nB>0 && fossil_isspace(zB[0]) ){ nB--; zB++; } while( nB>0 && fossil_isspace(zB[nB-1]) ){ nB--; } if( nA>250 ) nA = 250; if( nB>250 ) nB = 250; avg = (nA+nB)/2; if( avg==0 ) return 0; if( nA==nB && memcmp(zA, zB, nA)==0 ) return 0; memset(aFirst, 0xff, sizeof(aFirst)); zA--; zB--; /* Make both zA[] and zB[] 1-indexed */ for(i=nB; i>0; i--){ c = (unsigned char)zB[i]; aNext[i] = aFirst[c]; aFirst[c] = i; } best = 0; for(i=1; i<=nA-best; i++){ c = (unsigned char)zA[i]; for(j=aFirst[c]; j<nB-best && memcmp(&zA[i],&zB[j],best)==0; j = aNext[j]){ int limit = minInt(nA-i, nB-j); for(k=best; k<=limit && zA[k+i]==zB[k+j]; k++){} if( k>best ) best = k; } } score = (best>avg) ? 0 : (avg - best)*100/avg; #if 0 fprintf(stderr, "A: [%.*s]\nB: [%.*s]\nbest=%d avg=%d score=%d\n", |
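For the final score, match_dline() uses avg = (nA+nB)/2 over the whitespace-trimmed lengths and best = the longest run of bytes the two lines share, so 0 means an exact match and 100 means nothing in common. A small worked case using the formula shown above:

    /* trimmed lengths 50 and 30 -> avg = 40; longest shared run best = 10 */
    int avg = 40, best = 10;
    int score = (best>avg) ? 0 : (avg - best)*100/avg;  /* (40-10)*100/40 == 75: a weak match */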
︙ | ︙ | |||
1041 1042 1043 1044 1045 1046 1047 | if( nLeft*nRight>100000 ){ memset(aM, 4, mnLen); if( nLeft>mnLen ) memset(aM+mnLen, 1, nLeft-mnLen); if( nRight>mnLen ) memset(aM+mnLen, 2, nRight-mnLen); return aM; } | | | 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 | if( nLeft*nRight>100000 ){ memset(aM, 4, mnLen); if( nLeft>mnLen ) memset(aM+mnLen, 1, nLeft-mnLen); if( nRight>mnLen ) memset(aM+mnLen, 2, nRight-mnLen); return aM; } if( nRight < count(aBuf)-1 ){ pToFree = 0; a = aBuf; }else{ a = pToFree = fossil_malloc( sizeof(a[0])*(nRight+1) ); } /* Compute the best alignment */ |
︙ | ︙ | |||
2088 2089 2090 2091 2092 2093 2094 | } /* ** The input pParent is the next most recent ancestor of the file ** being annotated. Do another step of the annotation. Return true ** if additional annotation is required. */ | | > > > > > | 2113 2114 2115 2116 2117 2118 2119 2120 2121 2122 2123 2124 2125 2126 2127 2128 2129 2130 2131 2132 | } /* ** The input pParent is the next most recent ancestor of the file ** being annotated. Do another step of the annotation. Return true ** if additional annotation is required. */ static int annotation_step( Annotator *p, Blob *pParent, int iVers, u64 diffFlags ){ int i, j; int lnTo; /* Prepare the parent file to be diffed */ p->c.aFrom = break_into_lines(blob_str(pParent), blob_size(pParent), &p->c.nFrom, diffFlags); if( p->c.aFrom==0 ){ |
︙ | ︙ | |||
2132 2133 2134 2135 2136 2137 2138 | /* Return no errors */ return 0; } /* Annotation flags (any DIFF flag can be used as Annotation flag as well) */ | | | | 2162 2163 2164 2165 2166 2167 2168 2169 2170 2171 2172 2173 2174 2175 2176 2177 | /* Return no errors */ return 0; } /* Annotation flags (any DIFF flag can be used as Annotation flag as well) */ #define ANN_FILE_VERS (((u64)0x20)<<32) /* File vers not commit vers */ #define ANN_FILE_ANCEST (((u64)0x40)<<32) /* Prefer checkins in the ANCESTOR */ /* ** Compute a complete annotation on a file. The file is identified ** by its filename number (filename.fnid) and check-in (mlink.mid). */ static void annotate_file( Annotator *p, /* The annotator */ |
︙ | ︙ | |||
2279 2280 2281 2282 2283 2284 2285 | unsigned clr1, clr2, clr; int bBlame = g.zPath[0]!='a';/* True for BLAME output. False for ANNOTATE. */ /* Gather query parameters */ showLog = atoi(PD("log","1")); login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } | | | 2309 2310 2311 2312 2313 2314 2315 2316 2317 2318 2319 2320 2321 2322 2323 | unsigned clr1, clr2, clr; int bBlame = g.zPath[0]!='a';/* True for BLAME output. False for ANNOTATE. */ /* Gather query parameters */ showLog = atoi(PD("log","1")); login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } if( exclude_spiders() ) return; load_control(); mid = name_to_typed_rid(PD("checkin","0"),"ci"); zFilename = P("filename"); fnid = db_int(0, "SELECT fnid FROM filename WHERE name=%Q", zFilename); if( mid==0 || fnid==0 ){ fossil_redirect_home(); } iLimit = atoi(PD("limit","20")); if( P("filevers") ) annFlags |= ANN_FILE_VERS; |
︙ | ︙ | |||
2313 2314 2315 2316 2317 2318 2319 | url_add_parameter(&url, "filename", zFilename); if( iLimit!=20 ){ url_add_parameter(&url, "limit", sqlite3_mprintf("%d", iLimit)); } url_add_parameter(&url, "log", showLog ? "1" : "0"); if( ignoreWs ){ url_add_parameter(&url, "w", ""); | | | | | | < | < | | | | | | 2343 2344 2345 2346 2347 2348 2349 2350 2351 2352 2353 2354 2355 2356 2357 2358 2359 2360 2361 2362 2363 2364 2365 2366 2367 2368 2369 2370 2371 2372 2373 2374 2375 2376 2377 2378 | url_add_parameter(&url, "filename", zFilename); if( iLimit!=20 ){ url_add_parameter(&url, "limit", sqlite3_mprintf("%d", iLimit)); } url_add_parameter(&url, "log", showLog ? "1" : "0"); if( ignoreWs ){ url_add_parameter(&url, "w", ""); style_submenu_element("Show Whitespace Changes", "%s", url_render(&url, "w", 0, 0, 0)); }else{ style_submenu_element("Ignore Whitespace", "%s", url_render(&url, "w", "", 0, 0)); } if( showLog ){ style_submenu_element("Hide Log", "%s", url_render(&url, "log", "0", 0, 0)); }else{ style_submenu_element("Show Log", "%s", url_render(&url, "log", "1", 0, 0)); } if( ann.bLimit ){ char *z1, *z2; style_submenu_element("All Ancestors", "%s", url_render(&url, "limit", "-1", 0, 0)); z1 = sqlite3_mprintf("%d Ancestors", iLimit+20); z2 = sqlite3_mprintf("%d", iLimit+20); style_submenu_element(z1, "%s", url_render(&url, "limit", z2, 0, 0)); } if( iLimit>20 ){ style_submenu_element("20 Ancestors", "%s", url_render(&url, "limit", "20", 0, 0)); } if( skin_detail_boolean("white-foreground") ){ clr1 = 0xa04040; clr2 = 0x4059a0; }else{ clr1 = 0xffb5b5; /* Recent changes: red (hot) */ clr2 = 0xb5e0ff; /* Older changes: blue (cold) */ |
︙ | ︙ | |||
2375 2376 2377 2378 2379 2380 2381 | p->zFUuid,p[-1].zFUuid); @ %z(zLink)[diff-to-previous]</a> } } #endif } @ </ol> | | | 2403 2404 2405 2406 2407 2408 2409 2410 2411 2412 2413 2414 2415 2416 2417 | p->zFUuid,p[-1].zFUuid); @ %z(zLink)[diff-to-previous]</a> } } #endif } @ </ol> @ <hr /> } if( !ann.bLimit ){ @ <h2>Origin for each line in @ %z(href("%R/finfo?name=%h&ci=%!S", zFilename, zCI))%h(zFilename)</a> @ from check-in %z(href("%R/info/%!S",zCI))%S(zCI)</a>:</h2> iLimit = ann.nVers+10; }else{ |
︙ | ︙ | |||
2441 2442 2443 2444 2445 2446 2447 | ** ** Output the text of a file with markings to show when each line of ** the file was last modified. The "annotate" command shows line numbers ** and omits the username. The "blame" and "praise" commands show the user ** who made each check-in and omits the line number. ** ** Options: | | > | | | | | 2469 2470 2471 2472 2473 2474 2475 2476 2477 2478 2479 2480 2481 2482 2483 2484 2485 2486 2487 2488 | ** ** Output the text of a file with markings to show when each line of ** the file was last modified. The "annotate" command shows line numbers ** and omits the username. The "blame" and "praise" commands show the user ** who made each check-in and omits the line number. ** ** Options: ** --filevers Show file version numbers rather than ** check-in versions ** -l|--log List all versions analyzed ** -n|--limit N Only look backwards in time by N versions ** -w|--ignore-all-space Ignore white space when comparing lines ** -Z|--ignore-trailing-space Ignore whitespace at line end ** ** See also: info, finfo, timeline */ void annotate_cmd(void){ int fnid; /* Filename ID */ int fid; /* File instance ID */ int mid; /* Manifest where file was checked in */ |
︙ | ︙ |
Changes to src/diff.tcl.
︙ | ︙ | |||
225 226 227 228 229 230 231 | } enableSync y } wm withdraw . wm title . $CFG(TITLE) wm iconname . $CFG(TITLE) | > > > > > | > > > > > | | 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 | } enableSync y } wm withdraw . wm title . $CFG(TITLE) wm iconname . $CFG(TITLE) # Keystroke bindings for on the top-level window for navigation and # control also fire when those same keystrokes are pressed in the # Search entry box. Disable them, to prevent the diff screen from # disappearing abruptly and unexpectedly when searching for "q". # # bind . <q> exit # bind . <p> {catch searchPrev; break} # bind . <n> {catch searchNext; break} # bind . <Escape><Escape> exit bind . <Destroy> {after 0 exit} bind . <Tab> {cycleDiffs; break} bind . <<PrevWindow>> {cycleDiffs 1; break} bind . <Control-f> {searchOnOff; break} bind . <Control-g> {catch searchNext; break} bind . <Return> { event generate bb.files <1> event generate .bb.files <ButtonRelease-1> break } foreach {key axis args} { Up y {scroll -5 units} k y {scroll -5 units} Down y {scroll 5 units} |
︙ | ︙ | |||
257 258 259 260 261 262 263 264 265 266 267 268 269 270 | } { bind . <$key> "scroll-$axis $args; break" bind . <Shift-$key> continue } frame .bb ::ttk::menubutton .bb.files -text "Files" toplevel .wfiles wm withdraw .wfiles update idletasks wm transient .wfiles . wm overrideredirect .wfiles 1 listbox .wfiles.lb -width 0 -height $CFG(LB_HEIGHT) -activestyle none \ -yscroll {.wfiles.sb set} | > > > > | 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 | } { bind . <$key> "scroll-$axis $args; break" bind . <Shift-$key> continue } frame .bb ::ttk::menubutton .bb.files -text "Files" if {[tk windowingsystem] eq "win32"} { ::ttk::style theme use winnative .bb.files configure -padding {20 1 10 2} } toplevel .wfiles wm withdraw .wfiles update idletasks wm transient .wfiles . wm overrideredirect .wfiles 1 listbox .wfiles.lb -width 0 -height $CFG(LB_HEIGHT) -activestyle none \ -yscroll {.wfiles.sb set} |
︙ | ︙ | |||
356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 | if {$x(-column)==1} { grid config .lnB -column 0 grid config .txtB -column 1 .txtB tag config add -background $CFG(RM_BG) grid config .lnA -column 3 grid config .txtA -column 4 .txtA tag config rm -background $CFG(ADD_BG) } else { grid config .lnA -column 0 grid config .txtA -column 1 .txtA tag config rm -background $CFG(RM_BG) grid config .lnB -column 3 grid config .txtB -column 4 .txtB tag config add -background $CFG(ADD_BG) } .mkr config -state normal set clt [.mkr search -all < 1.0 end] set cgt [.mkr search -all > 1.0 end] foreach c $clt {.mkr replace $c "$c +1 chars" >} foreach c $cgt {.mkr replace $c "$c +1 chars" <} .mkr config -state disabled } ::ttk::button .bb.quit -text {Quit} -command exit ::ttk::button .bb.invert -text {Invert} -command invertDiff ::ttk::button .bb.save -text {Save As...} -command saveDiff pack .bb.quit .bb.invert -side left if {$fossilcmd!=""} {pack .bb.save -side left} | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 | if {$x(-column)==1} { grid config .lnB -column 0 grid config .txtB -column 1 .txtB tag config add -background $CFG(RM_BG) grid config .lnA -column 3 grid config .txtA -column 4 .txtA tag config rm -background $CFG(ADD_BG) .bb.invert config -text Uninvert } else { grid config .lnA -column 0 grid config .txtA -column 1 .txtA tag config rm -background $CFG(RM_BG) grid config .lnB -column 3 grid config .txtB -column 4 .txtB tag config add -background $CFG(ADD_BG) .bb.invert config -text Invert } .mkr config -state normal set clt [.mkr search -all < 1.0 end] set cgt [.mkr search -all > 1.0 end] foreach c $clt {.mkr replace $c "$c +1 chars" >} foreach c $cgt {.mkr replace $c "$c +1 chars" <} .mkr config -state disabled } proc searchOnOff {} { if {[info exists ::search]} { unset ::search .txtA tag remove search 1.0 end .txtB tag remove search 1.0 end pack forget .bb.sframe } else { set ::search .txtA if {![winfo exists .bb.sframe]} { frame .bb.sframe ::ttk::entry .bb.sframe.e -width 10 pack .bb.sframe.e -side left -fill y -expand 1 bind .bb.sframe.e <Return> {searchNext; break} ::ttk::button .bb.sframe.nx -text \u2193 -width 1 -command searchNext ::ttk::button .bb.sframe.pv -text \u2191 -width 1 -command searchPrev tk_optionMenu .bb.sframe.typ ::search_type \ Exact {No Case} {RegExp} {Whole Word} .bb.sframe.typ config -width 10 set ::search_type Exact pack .bb.sframe.nx .bb.sframe.pv .bb.sframe.typ -side left } pack .bb.sframe -side left after idle {focus .bb.sframe.e} } } proc searchNext {} {searchStep -forwards +1 1.0 end} proc searchPrev {} {searchStep -backwards -1 end 1.0} proc searchStep {direction incr start stop} { set pattern [.bb.sframe.e get] if {$pattern==""} return set count 0 set w $::search if {"$w"==".txtA"} {set other .txtB} {set other .txtA} if {[lsearch [$w mark names] search]<0} { $w mark set search $start } switch $::search_type { Exact {set st -exact} {No Case} {set st -nocase} {RegExp} {set st -regexp} {Whole Word} 
{set st -regexp; set pattern \\y$pattern\\y} } set idx [$w search -count count $direction $st -- \ $pattern "search $incr chars" $stop] if {"$idx"==""} { set idx [$other search -count count $direction $st -- $pattern $start $stop] if {"$idx"!=""} { set this $w set w $other set other $this } else { set idx [$w search -count count $direction $st -- $pattern $start $stop] } } $w tag remove search 1.0 end $w mark unset search $other tag remove search 1.0 end $other mark unset search if {"$idx"!=""} { $w mark set search $idx $w yview -pickplace $idx $w tag add search search "$idx +$count chars" $w tag config search -background {#fcc000} } set ::search $w } ::ttk::button .bb.quit -text {Quit} -command exit ::ttk::button .bb.invert -text {Invert} -command invertDiff ::ttk::button .bb.save -text {Save As...} -command saveDiff ::ttk::button .bb.search -text {Search} -command searchOnOff pack .bb.quit .bb.invert -side left if {$fossilcmd!=""} {pack .bb.save -side left} pack .bb.files .bb.search -side left grid rowconfigure . 1 -weight 1 grid columnconfigure . 1 -weight 1 grid columnconfigure . 4 -weight 1 grid .bb -row 0 -columnspan 6 eval grid [cols] -row 1 -sticky nsew grid .sby -row 1 -column 5 -sticky ns grid .sbxA -row 2 -columnspan 2 -sticky ew |
︙ | ︙ |
Changes to src/diffcmd.c.
︙ | ︙ | |||
410 411 412 413 414 415 416 417 418 419 420 421 422 423 | " WHERE vid=%d" " AND (deleted OR chnged OR rid==0)" " ORDER BY pathname /*scan*/", vid ); } db_prepare(&q, "%s", blob_sql_text(&sql)); while( db_step(&q)==SQLITE_ROW ){ const char *zPathname = db_column_text(&q,0); int isDeleted = db_column_int(&q, 1); int isChnged = db_column_int(&q,2); int isNew = db_column_int(&q,3); int srcid = db_column_int(&q, 4); int isLink = db_column_int(&q, 5); | > | 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 | " WHERE vid=%d" " AND (deleted OR chnged OR rid==0)" " ORDER BY pathname /*scan*/", vid ); } db_prepare(&q, "%s", blob_sql_text(&sql)); blob_reset(&sql); while( db_step(&q)==SQLITE_ROW ){ const char *zPathname = db_column_text(&q,0); int isDeleted = db_column_int(&q, 1); int isChnged = db_column_int(&q,2); int isNew = db_column_int(&q,3); int srcid = db_column_int(&q, 4); int isLink = db_column_int(&q, 5); |
︙ | ︙ | |||
771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 | ** the source check-in for the diff operation. If not specified, the ** source check-in is the base check-in for the current check-out. ** ** If the "--to VERSION" option appears, it specifies the check-in from ** which the second version of the file or files is taken. If there is ** no "--to" option then the (possibly edited) files in the current check-out ** are used. ** ** The "-i" command-line option forces the use of the internal diff logic ** rather than any external diff program that might be configured using ** the "setting" command. If no external diff program is configured, then ** the "-i" option is a no-op. The "-i" option converts "gdiff" into "diff". ** ** The "-N" or "--new-file" option causes the complete text of added or ** deleted files to be displayed. ** ** The "--diff-binary" option enables or disables the inclusion of binary files ** when using an external diff program. ** ** The "--binary" option causes files matching the glob PATTERN to be treated ** as binary when considering if they should be used with external diff program. ** This option overrides the "binary-glob" setting. ** ** Options: ** --binary PATTERN Treat files that match the glob PATTERN as binary ** --branch BRANCH Show diff of all changes on BRANCH ** --brief Show filenames only ** --context|-c N Use N lines of context ** --diff-binary BOOL Include binary files when using external commands ** --exec-abs-paths Force absolute path names with external commands. ** --exec-rel-paths Force relative path names with external commands. ** --from|-r VERSION Select VERSION as source for the diff ** --internal|-i Use internal diff logic ** --side-by-side|-y Side-by-side diff | > > > > > | 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 | ** the source check-in for the diff operation. If not specified, the ** source check-in is the base check-in for the current check-out. ** ** If the "--to VERSION" option appears, it specifies the check-in from ** which the second version of the file or files is taken. If there is ** no "--to" option then the (possibly edited) files in the current check-out ** are used. ** ** The "--checkin VERSION" option shows the changes made by ** check-in VERSION relative to its primary parent. ** ** The "-i" command-line option forces the use of the internal diff logic ** rather than any external diff program that might be configured using ** the "setting" command. If no external diff program is configured, then ** the "-i" option is a no-op. The "-i" option converts "gdiff" into "diff". ** ** The "-N" or "--new-file" option causes the complete text of added or ** deleted files to be displayed. ** ** The "--diff-binary" option enables or disables the inclusion of binary files ** when using an external diff program. ** ** The "--binary" option causes files matching the glob PATTERN to be treated ** as binary when considering if they should be used with external diff program. ** This option overrides the "binary-glob" setting. 
** ** Options: ** --binary PATTERN Treat files that match the glob PATTERN as binary ** --branch BRANCH Show diff of all changes on BRANCH ** --brief Show filenames only ** --checkin VERSION Show diff of all changes in VERSION ** --command PROG External diff program - overrides "diff-command" ** --context|-c N Use N lines of context ** --diff-binary BOOL Include binary files when using external commands ** --exec-abs-paths Force absolute path names with external commands. ** --exec-rel-paths Force relative path names with external commands. ** --from|-r VERSION Select VERSION as source for the diff ** --internal|-i Use internal diff logic ** --side-by-side|-y Side-by-side diff |
︙ | ︙ | |||
814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 | */ void diff_cmd(void){ int isGDiff; /* True for gdiff. False for normal diff */ int isInternDiff; /* True for internal diff */ int verboseFlag; /* True if -v or --verbose flag is used */ const char *zFrom; /* Source version number */ const char *zTo; /* Target version number */ const char *zBranch; /* Branch to diff */ const char *zDiffCmd = 0; /* External diff command. NULL for internal diff */ const char *zBinGlob = 0; /* Treat file names matching this as binary */ int fIncludeBinary = 0; /* Include binary files for external diff */ int againstUndo = 0; /* Diff against files in the undo buffer */ u64 diffFlags = 0; /* Flags to control the DIFF */ FileDirList *pFileDir = 0; /* Restrict the diff to these files */ if( find_option("tk",0,0)!=0 ){ diff_tk("diff", 2); return; } isGDiff = g.argv[1][0]=='g'; isInternDiff = find_option("internal","i",0)!=0; zFrom = find_option("from", "r", 1); zTo = find_option("to", 0, 1); zBranch = find_option("branch", 0, 1); againstUndo = find_option("undo",0,0)!=0; diffFlags = diff_options(); verboseFlag = find_option("verbose","v",0)!=0; if( !verboseFlag ){ verboseFlag = find_option("new-file","N",0)!=0; /* deprecated */ } if( verboseFlag ) diffFlags |= DIFF_VERBOSE; | > > | | > | | > > > > | | 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 | */ void diff_cmd(void){ int isGDiff; /* True for gdiff. False for normal diff */ int isInternDiff; /* True for internal diff */ int verboseFlag; /* True if -v or --verbose flag is used */ const char *zFrom; /* Source version number */ const char *zTo; /* Target version number */ const char *zCheckin; /* Check-in version number */ const char *zBranch; /* Branch to diff */ const char *zDiffCmd = 0; /* External diff command. 
NULL for internal diff */ const char *zBinGlob = 0; /* Treat file names matching this as binary */ int fIncludeBinary = 0; /* Include binary files for external diff */ int againstUndo = 0; /* Diff against files in the undo buffer */ u64 diffFlags = 0; /* Flags to control the DIFF */ FileDirList *pFileDir = 0; /* Restrict the diff to these files */ if( find_option("tk",0,0)!=0 ){ diff_tk("diff", 2); return; } isGDiff = g.argv[1][0]=='g'; isInternDiff = find_option("internal","i",0)!=0; zFrom = find_option("from", "r", 1); zTo = find_option("to", 0, 1); zCheckin = find_option("checkin", 0, 1); zBranch = find_option("branch", 0, 1); againstUndo = find_option("undo",0,0)!=0; diffFlags = diff_options(); verboseFlag = find_option("verbose","v",0)!=0; if( !verboseFlag ){ verboseFlag = find_option("new-file","N",0)!=0; /* deprecated */ } if( verboseFlag ) diffFlags |= DIFF_VERBOSE; if( againstUndo && ( zFrom!=0 || zTo!=0 || zCheckin!=0 || zBranch!=0) ){ fossil_fatal("cannot use --undo together with --from, --to, --checkin," " or --branch"); } if( zBranch ){ if( zTo || zFrom || zCheckin ){ fossil_fatal("cannot use --from, --to, or --checkin with --branch"); } zTo = zBranch; zFrom = mprintf("root:%s", zBranch); } if( zCheckin!=0 && ( zFrom!=0 || zTo!=0 ) ){ fossil_fatal("cannot use --checkin together with --from or --to"); } if( zTo==0 || againstUndo ){ db_must_be_within_tree(); }else if( zFrom==0 ){ fossil_fatal("must use --from if --to is present"); }else{ db_find_and_open_repository(0, 0); } if( !isInternDiff ){ zDiffCmd = find_option("command", 0, 1); if( zDiffCmd==0 ) zDiffCmd = diff_command_external(isGDiff); } zBinGlob = diff_get_binary_glob(); fIncludeBinary = diff_include_binary_files(); determine_exec_relative_option(1); verify_all_options(); if( g.argc>=3 ){ int i; |
︙ | ︙ | |||
879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 | pFileDir[0].zName[1] = 0; break; } pFileDir[i-2].nName = blob_size(&fname); pFileDir[i-2].nUsed = 0; blob_reset(&fname); } } if( againstUndo ){ if( db_lget_int("undo_available",0)==0 ){ fossil_print("No undo or redo is available\n"); return; } diff_against_undo(zDiffCmd, zBinGlob, fIncludeBinary, diffFlags, pFileDir); }else if( zTo==0 ){ diff_against_disk(zFrom, zDiffCmd, zBinGlob, fIncludeBinary, diffFlags, pFileDir); }else{ diff_two_versions(zFrom, zTo, zDiffCmd, zBinGlob, fIncludeBinary, diffFlags, pFileDir); } if( pFileDir ){ int i; for(i=0; pFileDir[i].zName; i++){ if( pFileDir[i].nUsed==0 && strcmp(pFileDir[0].zName,".")!=0 | > > > > > > > > > > > | | 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 | pFileDir[0].zName[1] = 0; break; } pFileDir[i-2].nName = blob_size(&fname); pFileDir[i-2].nUsed = 0; blob_reset(&fname); } } if ( zCheckin!=0 ){ int ridTo = name_to_typed_rid(zCheckin, "ci"); zTo = zCheckin; zFrom = db_text(0, "SELECT uuid FROM blob, plink" " WHERE plink.cid=%d AND plink.isprim AND plink.pid=blob.rid", ridTo); if( zFrom==0 ){ fossil_fatal("check-in %s has no parent", zTo); } } if( againstUndo ){ if( db_lget_int("undo_available",0)==0 ){ fossil_print("No undo or redo is available\n"); return; } diff_against_undo(zDiffCmd, zBinGlob, fIncludeBinary, diffFlags, pFileDir); }else if( zTo==0 ){ diff_against_disk(zFrom, zDiffCmd, zBinGlob, fIncludeBinary, diffFlags, pFileDir); }else{ diff_two_versions(zFrom, zTo, zDiffCmd, zBinGlob, fIncludeBinary, diffFlags, pFileDir); } if( pFileDir ){ int i; for(i=0; pFileDir[i].zName; i++){ if( pFileDir[i].nUsed==0 && strcmp(pFileDir[0].zName,".")!=0 && !file_wd_isdir(g.argv[i+2]) ){ fossil_fatal("not found: '%s'", g.argv[i+2]); } fossil_free(pFileDir[i].zName); } fossil_free(pFileDir); } |
︙ | ︙ |
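The new --checkin handling shown above looks up VERSION's primary parent through the plink table and then falls back to the ordinary two-version diff, so the two invocations below (with a made-up hash) should behave the same; as the added checks enforce, --checkin cannot be combined with --from, --to, --branch, or --undo:

  fossil diff --checkin abc1234
  fossil diff --from <primary-parent-of-abc1234> --to abc1234
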
Added src/dispatch.c.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 | /* ** Copyright (c) 2016 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) ** ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. 
** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** This file contains code used to map command names (ex: "help", "commit", ** "diff") or webpage names (ex: "/timeline", "/search") into the functions ** that implement those commands and web pages and their associated help ** text. */ #include "config.h" #include <assert.h> #include "dispatch.h" #if INTERFACE /* ** An instance of this object defines everything we need to know about an ** individual command or webpage. */ struct CmdOrPage { const char *zName; /* Name. Webpages start with "/". Commands do not */ void (*xFunc)(void); /* Function that implements the command or webpage */ const char *zHelp; /* Raw help text */ unsigned int eCmdFlags; /* Flags */ }; /*************************************************************************** ** These macros must match similar macros in mkindex.c ** Allowed values for CmdOrPage.eCmdFlags. */ #define CMDFLAG_1ST_TIER 0x0001 /* Most important commands */ #define CMDFLAG_2ND_TIER 0x0002 /* Obscure and seldom used commands */ #define CMDFLAG_TEST 0x0004 /* Commands for testing only */ #define CMDFLAG_WEBPAGE 0x0008 /* Web pages */ #define CMDFLAG_COMMAND 0x0010 /* A command */ /**************************************************************************/ /* Values for the 2nd parameter to dispatch_name_search() */ #define CMDFLAG_ANY 0x0018 /* Match anything */ #define CMDFLAG_PREFIX 0x0020 /* Prefix match is ok */ #endif /* INTERFACE */ /* ** The page_index.h file contains the definition for aCommand[] - an array ** of CmdOrPage objects that defines all available commands and webpages ** known to Fossil. ** ** The entries in aCommand[] are in sorted order by name. Since webpage names ** always begin with "/", all webpage names occur first. The page_index.h file ** also sets the FOSSIL_FIRST_CMD macro to be the *approximate* index ** in aCommand[] of the first command entry. FOSSIL_FIRST_CMD might be ** slightly too low, and so the range FOSSIL_FIRST_CMD...MX_COMMAND might ** contain a few webpage entries at the beginning. ** ** The page_index.h file is generated by the mkindex program which scans all ** source code files looking for header comments on the functions that ** implement command and webpages. */ #include "page_index.h" #define MX_COMMAND count(aCommand) /* ** Given a command or webpage name in zName, find the corresponding CmdOrPage ** object and return a pointer to that object in *ppCmd. ** ** The eType field is CMDFLAG_COMMAND to lookup commands or CMDFLAG_WEBPAGE ** to look up webpages or CMDFLAG_ANY to look for either. If the CMDFLAG_PREFIX ** flag is set, then a prefix match is allowed. ** ** Return values: ** 0: Success. *ppCmd is set to the appropriate CmdOrPage ** 1: Not found. ** 2: Ambiguous. Two or more entries match. 
*/ int dispatch_name_search( const char *zName, /* Look for this name */ unsigned eType, /* CMDFLAGS_* bits */ const CmdOrPage **ppCmd /* Write the matching CmdOrPage object here */ ){ int upr, lwr, mid; int nName = strlen(zName); lwr = 0; upr = MX_COMMAND - 1; while( lwr<=upr ){ int c; mid = (upr+lwr)/2; c = strcmp(zName, aCommand[mid].zName); if( c==0 ){ *ppCmd = &aCommand[mid]; return 0; /* An exact match */ }else if( c<0 ){ upr = mid - 1; }else{ lwr = mid + 1; } } if( (eType & CMDFLAG_PREFIX)!=0 && lwr<MX_COMMAND && strncmp(zName, aCommand[lwr].zName, nName)==0 ){ if( lwr<MX_COMMAND-1 && strncmp(zName, aCommand[lwr+1].zName, nName)==0 ){ return 2; /* Ambiguous prefix */ }else{ *ppCmd = &aCommand[lwr]; return 0; /* Prefix match */ } } return 1; /* Not found */ } /* ** Fill Blob with a space-separated list of all command names that ** match the prefix zPrefix. */ void dispatch_matching_names(const char *zPrefix, Blob *pList){ int i; int nPrefix = (int)strlen(zPrefix); for(i=FOSSIL_FIRST_CMD; i<MX_COMMAND; i++){ if( strncmp(zPrefix, aCommand[i].zName, nPrefix)==0 ){ blob_appendf(pList, " %s", aCommand[i].zName); } } } /* ** Attempt to reformat plain-text help into HTML for display on a webpage. ** ** The HTML output is appended to Blob pHtml, which should already be ** initialized. */ static void help_to_html(const char *zHelp, Blob *pHtml){ char *s; char *d; char *z; /* Transform "%fossil" into just "fossil" */ z = s = d = mprintf("%s", zHelp); while( *s ){ if( *s=='%' && strncmp(s, "%fossil", 7)==0 ){ s++; }else{ *d++ = *s++; } } *d = 0; blob_appendf(pHtml, "<pre>\n%h\n</pre>\n", z); fossil_free(z); } /* ** COMMAND: test-all-help ** ** Usage: %fossil test-all-help ?OPTIONS? ** ** Show help text for commands and pages. Useful for proof-reading. ** Defaults to just the CLI commands. Specify --www to see only the ** web pages, or --everything to see both commands and pages. ** ** Options: ** -e|--everything Show all commands and pages. ** -t|--test Include test- commands ** -w|--www Show WWW pages. ** -h|--html Transform output to HTML. */ void test_all_help_cmd(void){ int i; int mask = CMDFLAG_1ST_TIER | CMDFLAG_2ND_TIER; int useHtml = find_option("html","h",0)!=0; if( find_option("www","w",0) ){ mask = CMDFLAG_WEBPAGE; } if( find_option("everything","e",0) ){ mask = CMDFLAG_1ST_TIER | CMDFLAG_2ND_TIER | CMDFLAG_WEBPAGE; } if( find_option("test","t",0) ){ mask |= CMDFLAG_TEST; } if( useHtml ) fossil_print("<!--\n"); fossil_print("Help text for:\n"); if( mask & CMDFLAG_1ST_TIER ) fossil_print(" * Commands\n"); if( mask & CMDFLAG_2ND_TIER ) fossil_print(" * Auxiliary commands\n"); if( mask & CMDFLAG_TEST ) fossil_print(" * Test commands\n"); if( mask & CMDFLAG_WEBPAGE ) fossil_print(" * Web pages\n"); if( useHtml ){ fossil_print("-->\n"); fossil_print("<!-- start_all_help -->\n"); }else{ fossil_print("---\n"); } for(i=0; i<MX_COMMAND; i++){ if( (aCommand[i].eCmdFlags & mask)==0 ) continue; fossil_print("# %s\n", aCommand[i].zName); if( useHtml ){ Blob html; blob_zero(&html); help_to_html(aCommand[i].zHelp, &html); fossil_print("%s\n\n", blob_str(&html)); blob_reset(&html); }else{ fossil_print("%s\n\n", aCommand[i].zHelp); } } if( useHtml ){ fossil_print("<!-- end_all_help -->\n"); }else{ fossil_print("---\n"); } version_cmd(); } /* ** WEBPAGE: help ** URL: /help?name=CMD ** ** Show the built-in help text for CMD. CMD can be a command-line interface ** command or a page name from the web interface. 
*/ void help_page(void){ const char *zCmd = P("cmd"); if( zCmd==0 ) zCmd = P("name"); if( zCmd && *zCmd ){ int rc; const CmdOrPage *pCmd = 0; style_header("Help: %s", zCmd); style_submenu_element("Command-List", "%s/help", g.zTop); if( *zCmd=='/' ){ /* Some of the webpages require query parameters in order to work. ** @ <h1>The "<a href='%R%s(zCmd)'>%s(zCmd)</a>" page:</h1> */ @ <h1>The "%s(zCmd)" page:</h1> }else{ @ <h1>The "%s(zCmd)" command:</h1> } rc = dispatch_name_search(zCmd, CMDFLAG_ANY, &pCmd); if( rc==1 ){ @ unknown command: %s(zCmd) }else if( rc==2 ){ @ ambiguous command prefix: %s(zCmd) }else{ if( pCmd->zHelp[0]==0 ){ @ no help available for the %s(pCmd->zName) command }else{ @ <blockquote> help_to_html(pCmd->zHelp, cgi_output_blob()); @ </blockquote> } } }else{ int i, j, n; style_header("Help"); @ <h1>Available commands:</h1> @ <table border="0"><tr> for(i=j=0; i<MX_COMMAND; i++){ const char *z = aCommand[i].zName; if( '/'==*z || strncmp(z,"test",4)==0 ) continue; j++; } n = (j+5)/6; for(i=j=0; i<MX_COMMAND; i++){ const char *z = aCommand[i].zName; const char *zBoldOn = aCommand[i].eCmdFlags&CMDFLAG_1ST_TIER?"<b>" :""; const char *zBoldOff = aCommand[i].eCmdFlags&CMDFLAG_1ST_TIER?"</b>":""; if( '/'==*z || strncmp(z,"test",4)==0 ) continue; if( j==0 ){ @ <td valign="top"><ul> } @ <li><a href="%R/help?cmd=%s(z)">%s(zBoldOn)%s(z)%s(zBoldOff)</a></li> j++; if( j>=n ){ @ </ul></td> j = 0; } } if( j>0 ){ @ </ul></td> } @ </tr></table> @ <h1>Available web UI pages:</h1> @ <table border="0"><tr> for(i=j=0; i<MX_COMMAND; i++){ const char *z = aCommand[i].zName; if( '/'!=*z ) continue; j++; } n = (j+4)/5; for(i=j=0; i<MX_COMMAND; i++){ const char *z = aCommand[i].zName; if( '/'!=*z ) continue; if( j==0 ){ @ <td valign="top"><ul> } if( aCommand[i].zHelp[0] ){ @ <li><a href="%R/help?cmd=%s(z)">%s(z+1)</a></li> }else{ @ <li>%s(z+1)</li> } j++; if( j>=n ){ @ </ul></td> j = 0; } } if( j>0 ){ @ </ul></td> } @ </tr></table> @ <h1>Unsupported commands:</h1> @ <table border="0"><tr> for(i=j=0; i<MX_COMMAND; i++){ const char *z = aCommand[i].zName; if( strncmp(z,"test",4)!=0 ) continue; j++; } n = (j+3)/4; for(i=j=0; i<MX_COMMAND; i++){ const char *z = aCommand[i].zName; if( strncmp(z,"test",4)!=0 ) continue; if( j==0 ){ @ <td valign="top"><ul> } if( aCommand[i].zHelp[0] ){ @ <li><a href="%R/help?cmd=%s(z)">%s(z)</a></li> }else{ @ <li>%s(z)</li> } j++; if( j>=n ){ @ </ul></td> j = 0; } } if( j>0 ){ @ </ul></td> } @ </tr></table> } style_footer(); } /* ** WEBPAGE: test-all-help ** ** Show all help text on a single page. Useful for proof-reading. */ void test_all_help_page(void){ int i; style_header("All Help Text"); for(i=0; i<MX_COMMAND; i++){ if( memcmp(aCommand[i].zName, "test", 4)==0 ) continue; @ <h2>%s(aCommand[i].zName):</h2> @ <blockquote> help_to_html(aCommand[i].zHelp, cgi_output_blob()); @ </blockquote> } style_footer(); } static void multi_column_list(const char **azWord, int nWord){ int i, j, len; int mxLen = 0; int nCol; int nRow; for(i=0; i<nWord; i++){ len = strlen(azWord[i]); if( len>mxLen ) mxLen = len; } nCol = 80/(mxLen+2); if( nCol==0 ) nCol = 1; nRow = (nWord + nCol - 1)/nCol; for(i=0; i<nRow; i++){ const char *zSpacer = ""; for(j=i; j<nWord; j+=nRow){ fossil_print("%s%-*s", zSpacer, mxLen, azWord[j]); zSpacer = " "; } fossil_print("\n"); } } /* ** COMMAND: test-list-webpage ** ** List all web pages. 
*/ void cmd_test_webpage_list(void){ int i, nCmd; const char *aCmd[MX_COMMAND]; for(i=nCmd=0; i<MX_COMMAND; i++){ if(CMDFLAG_WEBPAGE & aCommand[i].eCmdFlags){ aCmd[nCmd++] = aCommand[i].zName; } } assert(nCmd && "page list is empty?"); multi_column_list(aCmd, nCmd); } /* ** List of commands starting with zPrefix, or all commands if zPrefix is NULL. */ static void command_list(const char *zPrefix, int cmdMask){ int i, nCmd; int nPrefix = zPrefix ? strlen(zPrefix) : 0; const char *aCmd[MX_COMMAND]; for(i=nCmd=0; i<MX_COMMAND; i++){ const char *z = aCommand[i].zName; if( (aCommand[i].eCmdFlags & cmdMask)==0 ) continue; if( zPrefix && memcmp(zPrefix, z, nPrefix)!=0 ) continue; aCmd[nCmd++] = aCommand[i].zName; } multi_column_list(aCmd, nCmd); } /* ** COMMAND: help ** ** Usage: %fossil help COMMAND ** or: %fossil COMMAND --help ** ** Display information on how to use COMMAND. To display a list of ** available commands use one of: ** ** %fossil help Show common commands ** %fossil help -a|--all Show both common and auxiliary commands ** %fossil help -t|--test Show test commands only ** %fossil help -x|--aux Show auxiliary commands only ** %fossil help -w|--www Show list of WWW pages */ void help_cmd(void){ int rc; int isPage = 0; const char *z; const char *zCmdOrPage; const char *zCmdOrPagePlural; const CmdOrPage *pCmd = 0; if( g.argc<3 ){ z = g.argv[0]; fossil_print( "Usage: %s help COMMAND\n" "Common COMMANDs: (use \"%s help -a|--all\" for a complete list)\n", z, z); command_list(0, CMDFLAG_1ST_TIER); version_cmd(); return; } if( find_option("all","a",0) ){ command_list(0, CMDFLAG_1ST_TIER | CMDFLAG_2ND_TIER); return; } else if( find_option("www","w",0) ){ command_list(0, CMDFLAG_WEBPAGE); return; } else if( find_option("aux","x",0) ){ command_list(0, CMDFLAG_2ND_TIER); return; } else if( find_option("test","t",0) ){ command_list(0, CMDFLAG_TEST); return; } isPage = ('/' == *g.argv[2]) ? 1 : 0; if(isPage){ zCmdOrPage = "page"; zCmdOrPagePlural = "pages"; }else{ zCmdOrPage = "command"; zCmdOrPagePlural = "commands"; } rc = dispatch_name_search(g.argv[2], CMDFLAG_ANY|CMDFLAG_PREFIX, &pCmd); if( rc==1 ){ fossil_print("unknown %s: %s\nAvailable %s:\n", zCmdOrPage, g.argv[2], zCmdOrPagePlural); command_list(0, isPage ? CMDFLAG_WEBPAGE : (0xff & ~CMDFLAG_WEBPAGE)); fossil_exit(1); }else if( rc==2 ){ fossil_print("ambiguous %s prefix: %s\nMatching %s:\n", zCmdOrPage, g.argv[2], zCmdOrPagePlural); command_list(g.argv[2], 0xff); fossil_exit(1); } z = pCmd->zHelp; if( z==0 ){ fossil_fatal("no help available for the %s %s", pCmd->zName, zCmdOrPage); } while( *z ){ if( *z=='%' && strncmp(z, "%fossil", 7)==0 ){ fossil_print("%s", g.argv[0]); z += 7; }else{ putchar(*z); z++; } } putchar('\n'); } |
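dispatch_name_search() above resolves a command or webpage name by binary search over the name-sorted aCommand[] table, then falls back to a unique-prefix match, returning 0 (found), 1 (not found) or 2 (ambiguous). The stand-alone sketch below reproduces only that lookup strategy against a small invented table; it is an illustration, not Fossil's actual dispatcher:

#include <stdio.h>
#include <string.h>

/* Hypothetical command table; like aCommand[], it must stay sorted by name. */
typedef struct { const char *zName; } DemoCmd;
static const DemoCmd aDemo[] = {
  {"add"}, {"annotate"}, {"commit"}, {"diff"}, {"help"}, {"update"}
};
#define N_DEMO (int)(sizeof(aDemo)/sizeof(aDemo[0]))

/* Return 0 on an exact or unique-prefix match (index in *pIdx),
** 1 if nothing matches, 2 if the prefix is ambiguous. */
static int demo_name_search(const char *zName, int *pIdx){
  int lwr = 0, upr = N_DEMO-1, n = (int)strlen(zName);
  while( lwr<=upr ){
    int mid = (lwr+upr)/2;
    int c = strcmp(zName, aDemo[mid].zName);
    if( c==0 ){ *pIdx = mid; return 0; }        /* exact match */
    if( c<0 ) upr = mid-1; else lwr = mid+1;
  }
  /* lwr now points at the first entry >= zName: try a prefix match there */
  if( lwr<N_DEMO && strncmp(zName, aDemo[lwr].zName, n)==0 ){
    if( lwr<N_DEMO-1 && strncmp(zName, aDemo[lwr+1].zName, n)==0 ){
      return 2;                                 /* more than one candidate */
    }
    *pIdx = lwr;
    return 0;                                   /* unique prefix */
  }
  return 1;                                     /* not found */
}

int main(void){
  int idx = -1;
  int rc = demo_name_search("ann", &idx);
  if( rc==0 ){
    printf("matched %s\n", aDemo[idx].zName);   /* prints "matched annotate" */
  }else{
    printf("%s\n", rc==1 ? "not found" : "ambiguous");
  }
  return 0;
}
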
Changes to src/doc.c.
︙ | ︙ | |||
52 53 54 55 56 57 58 | }; if( !looks_like_binary(pBlob) ) { return 0; /* Plain text */ } x = (const unsigned char*)blob_buffer(pBlob); n = blob_size(pBlob); | | | 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 | }; if( !looks_like_binary(pBlob) ) { return 0; /* Plain text */ } x = (const unsigned char*)blob_buffer(pBlob); n = blob_size(pBlob); for(i=0; i<count(aMime); i++){ if( n>=aMime[i].size && memcmp(x, aMime[i].zPrefix, aMime[i].size)==0 ){ return aMime[i].zMimetype; } } return "unknown/unknown"; } |
︙ | ︙ | |||
82 83 84 85 86 87 88 89 90 91 92 93 94 95 | { "asf", 3, "video/x-ms-asf" }, { "asx", 3, "video/x-ms-asx" }, { "au", 2, "audio/ulaw" }, { "avi", 3, "video/x-msvideo" }, { "bat", 3, "application/x-msdos-program" }, { "bcpio", 5, "application/x-bcpio" }, { "bin", 3, "application/octet-stream" }, { "c", 1, "text/plain" }, { "cc", 2, "text/plain" }, { "ccad", 4, "application/clariscad" }, { "cdf", 3, "application/x-netcdf" }, { "class", 5, "application/octet-stream" }, { "cod", 3, "application/vnd.rim.cod" }, { "com", 3, "application/x-msdos-program" }, | > > | 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 | { "asf", 3, "video/x-ms-asf" }, { "asx", 3, "video/x-ms-asx" }, { "au", 2, "audio/ulaw" }, { "avi", 3, "video/x-msvideo" }, { "bat", 3, "application/x-msdos-program" }, { "bcpio", 5, "application/x-bcpio" }, { "bin", 3, "application/octet-stream" }, { "bz2", 3, "application/x-bzip2" }, { "bzip", 4, "application/x-bzip" }, { "c", 1, "text/plain" }, { "cc", 2, "text/plain" }, { "ccad", 4, "application/clariscad" }, { "cdf", 3, "application/x-netcdf" }, { "class", 5, "application/octet-stream" }, { "cod", 3, "application/vnd.rim.cod" }, { "com", 3, "application/x-msdos-program" }, |
︙ | ︙ | |||
289 290 291 292 293 294 295 | /* ** Verify that all entries in the aMime[] table are in sorted order. ** Abort with a fatal error if any is out-of-order. */ static void mimetype_verify(void){ int i; | | | 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 | /* ** Verify that all entries in the aMime[] table are in sorted order. ** Abort with a fatal error if any is out-of-order. */ static void mimetype_verify(void){ int i; for(i=1; i<count(aMime); i++){ if( fossil_strcmp(aMime[i-1].zSuffix,aMime[i].zSuffix)>=0 ){ fossil_fatal("mimetypes out of sequence: %s before %s", aMime[i-1].zSuffix, aMime[i].zSuffix); } } } |
︙ | ︙ | |||
327 328 329 330 331 332 333 | if( zName[i]=='.' ) z = &zName[i+1]; } len = strlen(z); if( len<sizeof(zSuffix)-1 ){ sqlite3_snprintf(sizeof(zSuffix), zSuffix, "%s", z); for(i=0; zSuffix[i]; i++) zSuffix[i] = fossil_tolower(zSuffix[i]); first = 0; | | | 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 | if( zName[i]=='.' ) z = &zName[i+1]; } len = strlen(z); if( len<sizeof(zSuffix)-1 ){ sqlite3_snprintf(sizeof(zSuffix), zSuffix, "%s", z); for(i=0; zSuffix[i]; i++) zSuffix[i] = fossil_tolower(zSuffix[i]); first = 0; last = count(aMime) - 1; while( first<=last ){ int c; i = (first+last)/2; c = fossil_strcmp(zSuffix, aMime[i].zSuffix); if( c==0 ) return aMime[i].zMimetype; if( c<0 ){ last = i-1; |
︙ | ︙ | |||
380 381 382 383 384 385 386 | @ suffixes and the following table to guess at the appropriate mimetype @ for each document.</p> @ <table id='mimeTable' border=1 cellpadding=0 class='mimetypetable'> @ <thead> @ <tr><th>Suffix<th>Mimetype @ </thead> @ <tbody> | | | 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 | @ suffixes and the following table to guess at the appropriate mimetype @ for each document.</p> @ <table id='mimeTable' border=1 cellpadding=0 class='mimetypetable'> @ <thead> @ <tr><th>Suffix<th>Mimetype @ </thead> @ <tbody> for(i=0; i<count(aMime); i++){ @ <tr><td>%h(aMime[i].zSuffix)<td>%h(aMime[i].zMimetype)</tr> } @ </tbody></table> output_table_sorting_javascript("mimeTable","tt",1); style_footer(); } |
︙ | ︙ | |||
502 503 504 505 506 507 508 | ** and any case for href. */ static void convert_href_and_output(Blob *pIn){ int i, base; int n = blob_size(pIn); char *z = blob_buffer(pIn); for(base=0, i=7; i<n; i++){ | | > | | 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 | ** and any case for href. */ static void convert_href_and_output(Blob *pIn){ int i, base; int n = blob_size(pIn); char *z = blob_buffer(pIn); for(base=0, i=7; i<n; i++){ if( z[i]=='$' && strncmp(&z[i],"$ROOT/", 6)==0 && (z[i-1]=='\'' || z[i-1]=='"') && i-base>=9 && (fossil_strnicmp(&z[i-7]," href=", 6)==0 || fossil_strnicmp(&z[i-9]," action=", 8)==0) ){ blob_append(cgi_output_blob(), &z[base], i-base); blob_appendf(cgi_output_blob(), "%R"); base = i+5; } } blob_append(cgi_output_blob(), &z[base], i-base); } /* ** WEBPAGE: uv ** WEBPAGE: doc ** URL: /uv/FILE ** URL: /doc/CHECKIN/FILE ** ** CHECKIN can be either tag or SHA1 hash or timestamp identifying a ** particular check, or the name of a branch (meaning the most recent ** check-in on that branch) or one of various magic words: ** ** "tip" means the most recent check-in |
︙ | ︙ | |||
574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 | char *zCheckin = "tip"; /* The check-in holding the document */ int vid = 0; /* Artifact of check-in */ int rid = 0; /* Artifact of file */ int i; /* Loop counter */ Blob filebody; /* Content of the documentation file */ Blob title; /* Document title */ int nMiss = (-1); /* Failed attempts to find the document */ static const char *const azSuffix[] = { "index.html", "index.wiki", "index.md" #ifdef FOSSIL_ENABLE_TH1_DOCS , "index.th1" #endif }; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } blob_init(&title, 0, 0); db_begin_transaction(); | > > > | > > > > | | | | | | > | | | > > > | > | > > > > > | | 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 | char *zCheckin = "tip"; /* The check-in holding the document */ int vid = 0; /* Artifact of check-in */ int rid = 0; /* Artifact of file */ int i; /* Loop counter */ Blob filebody; /* Content of the documentation file */ Blob title; /* Document title */ int nMiss = (-1); /* Failed attempts to find the document */ int isUV = g.zPath[0]=='u'; /* True for /uv. False for /doc */ const char *zDfltTitle; static const char *const azSuffix[] = { "index.html", "index.wiki", "index.md" #ifdef FOSSIL_ENABLE_TH1_DOCS , "index.th1" #endif }; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } blob_init(&title, 0, 0); zDfltTitle = isUV ? "" : "Documentation"; db_begin_transaction(); while( rid==0 && (++nMiss)<=count(azSuffix) ){ zName = P("name"); if( isUV ){ if( zName==0 ) zName = "index.wiki"; i = 0; }else{ if( zName==0 || zName[0]==0 ) zName = "tip/index.wiki"; for(i=0; zName[i] && zName[i]!='/'; i++){} zCheckin = mprintf("%.*s", i, zName); if( fossil_strcmp(zCheckin,"ckout")==0 && g.localOpen==0 ){ zCheckin = "tip"; } } if( nMiss==count(azSuffix) ){ zName = "404.md"; }else if( zName[i]==0 ){ assert( nMiss>=0 && nMiss<count(azSuffix) ); zName = azSuffix[nMiss]; }else if( !isUV ){ zName += i; } while( zName[0]=='/' ){ zName++; } if( isUV ){ g.zPath = mprintf("%s/%s", g.zPath, zName); }else{ g.zPath = mprintf("%s/%s/%s", g.zPath, zCheckin, zName); } if( nMiss==0 ) zOrigName = zName; if( !file_is_simple_pathname(zName, 1) ){ if( sqlite3_strglob("*/", zName)==0 ){ assert( nMiss>=0 && nMiss<count(azSuffix) ); zName = mprintf("%s%s", zName, azSuffix[nMiss]); if( !file_is_simple_pathname(zName, 1) ){ goto doc_not_found; } }else{ goto doc_not_found; } } if( isUV ){ if( unversioned_content(zName, &filebody)==0 ){ rid = 1; zDfltTitle = zName; } }else if( fossil_strcmp(zCheckin,"ckout")==0 ){ /* Read from the local checkout */ char *zFullpath; db_must_be_within_tree(); zFullpath = mprintf("%s/%s", g.zLocalRoot, zName); if( file_isfile(zFullpath) && blob_read_from_file(&filebody, zFullpath)>0 ){ rid = 1; /* Fake RID just to get the loop to end */ |
︙ | ︙ | |||
641 642 643 644 645 646 647 | ** file to the user */ zMime = nMiss==0 ? P("mimetype") : 0; if( zMime==0 ){ zMime = mimetype_from_name(zName); } Th_Store("doc_name", zName); | > | | | | > | | | | > > | > > | > | > > > > | 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 | ** file to the user */ zMime = nMiss==0 ? P("mimetype") : 0; if( zMime==0 ){ zMime = mimetype_from_name(zName); } Th_Store("doc_name", zName); if( vid ){ Th_Store("doc_version", db_text(0, "SELECT '[' || substr(uuid,1,10) || ']'" " FROM blob WHERE rid=%d", vid)); Th_Store("doc_date", db_text(0, "SELECT datetime(mtime) FROM event" " WHERE objid=%d AND type='ci'", vid)); } if( fossil_strcmp(zMime, "text/x-fossil-wiki")==0 ){ Blob tail; style_adunit_config(ADUNIT_RIGHT_OK); if( wiki_find_title(&filebody, &title, &tail) ){ style_header("%s", blob_str(&title)); wiki_convert(&tail, 0, WIKI_BUTTONS); }else{ style_header("%s", zDfltTitle); wiki_convert(&filebody, 0, WIKI_BUTTONS); } style_footer(); }else if( fossil_strcmp(zMime, "text/x-markdown")==0 ){ Blob tail = BLOB_INITIALIZER; markdown_to_html(&filebody, &title, &tail); if( blob_size(&title)>0 ){ style_header("%s", blob_str(&title)); }else{ style_header("%s", nMiss>=count(azSuffix)? "Not Found" : zDfltTitle); } convert_href_and_output(&tail); style_footer(); }else if( fossil_strcmp(zMime, "text/plain")==0 ){ style_header("%s", zDfltTitle); @ <blockquote><pre> @ %h(blob_str(&filebody)) @ </pre></blockquote> style_footer(); }else if( fossil_strcmp(zMime, "text/html")==0 && doc_is_embedded_html(&filebody, &title) ){ if( blob_size(&title)==0 ) blob_append(&title,zName,-1); style_header("%s", blob_str(&title)); convert_href_and_output(&filebody); style_footer(); #ifdef FOSSIL_ENABLE_TH1_DOCS }else if( Th_AreDocsEnabled() && fossil_strcmp(zMime, "application/x-th1")==0 ){ int raw = P("raw")!=0; if( !raw ){ style_header("%h", zName); } Th_Render(blob_str(&filebody)); if( !raw ){ style_footer(); } #endif }else{ cgi_set_content_type(zMime); cgi_set_content(&filebody); } if( nMiss>=count(azSuffix) ) cgi_set_status(404, "Not Found"); db_end_transaction(0); return; /* Jump here when unable to locate the document */ doc_not_found: db_end_transaction(0); if( isUV && P("name")==0 ){ uvstat_page(); return; } cgi_set_status(404, "Not Found"); style_header("Not Found"); @ <p>Document %h(zOrigName) not found if( fossil_strcmp(zCheckin,"ckout")!=0 ){ @ in %z(href("%R/tree?ci=%T",zCheckin))%h(zCheckin)</a> } style_footer(); |
︙ | ︙ |
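mimetype_from_name() above guesses a document's mimetype by taking the text after the final '.', lower-casing it, and binary-searching the suffix-sorted aMime[] table (mimetype_verify() aborts if that table ever falls out of order). Here is a self-contained sketch of the same lookup using a tiny invented table rather than Fossil's real one:

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Invented suffix table; entries must stay sorted by suffix. */
static const struct { const char *zSuffix, *zMime; } aDemoMime[] = {
  { "bz2",  "application/x-bzip2"  },
  { "html", "text/html"            },
  { "md",   "text/x-markdown"      },
  { "wiki", "text/x-fossil-wiki"   },
};
#define N_MIME (int)(sizeof(aDemoMime)/sizeof(aDemoMime[0]))

static const char *demo_mimetype(const char *zName){
  char zSuffix[32];
  const char *z = zName;
  int i, first, last;
  for(i=0; zName[i]; i++){
    if( zName[i]=='.' ) z = &zName[i+1];   /* remember text after last '.' */
  }
  if( strlen(z)>=sizeof(zSuffix) ) return "unknown/unknown";
  for(i=0; z[i]; i++) zSuffix[i] = (char)tolower((unsigned char)z[i]);
  zSuffix[i] = 0;
  first = 0; last = N_MIME-1;
  while( first<=last ){                    /* binary search on the suffix */
    int mid = (first+last)/2;
    int c = strcmp(zSuffix, aDemoMime[mid].zSuffix);
    if( c==0 ) return aDemoMime[mid].zMime;
    if( c<0 ) last = mid-1; else first = mid+1;
  }
  return "unknown/unknown";
}

int main(void){
  printf("%s\n", demo_mimetype("www/index.WIKI"));  /* text/x-fossil-wiki */
  return 0;
}
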
Changes to src/encode.c.
︙ | ︙ | |||
21 22 23 24 25 26 27 | #include "encode.h" /* ** Make the given string safe for HTML by converting every "<" into "<", ** every ">" into ">" and every "&" into "&". Return a pointer ** to a new string obtained from malloc(). ** | | > | 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 | #include "encode.h" /* ** Make the given string safe for HTML by converting every "<" into "<", ** every ">" into ">" and every "&" into "&". Return a pointer ** to a new string obtained from malloc(). ** ** We also encode " as " and ' as ' so they can appear as an argument ** to markup. */ char *htmlize(const char *zIn, int n){ int c; int i = 0; int count = 0; char *zOut; if( n<0 ) n = strlen(zIn); while( i<n && (c = zIn[i])!=0 ){ switch( c ){ case '<': count += 4; break; case '>': count += 4; break; case '&': count += 5; break; case '"': count += 6; break; case '\'': count += 5; break; default: count++; break; } i++; } i = 0; zOut = fossil_malloc( count+1 ); while( n-->0 && (c = *zIn)!=0 ){ |
︙ | ︙ | |||
72 73 74 75 76 77 78 79 80 81 82 83 84 85 | zOut[i++] = '&'; zOut[i++] = 'q'; zOut[i++] = 'u'; zOut[i++] = 'o'; zOut[i++] = 't'; zOut[i++] = ';'; break; default: zOut[i++] = c; break; } zIn++; } zOut[i] = 0; | > > > > > > > | 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 | zOut[i++] = '&'; zOut[i++] = 'q'; zOut[i++] = 'u'; zOut[i++] = 'o'; zOut[i++] = 't'; zOut[i++] = ';'; break; case '\'': zOut[i++] = '&'; zOut[i++] = '#'; zOut[i++] = '3'; zOut[i++] = '9'; zOut[i++] = ';'; break; default: zOut[i++] = c; break; } zIn++; } zOut[i] = 0; |
︙ | ︙ | |||
110 111 112 113 114 115 116 117 118 119 120 121 122 123 | blob_append(p, "&", 5); j = i+1; break; case '"': if( j<i ) blob_append(p, zIn+j, i-j); blob_append(p, """, 6); j = i+1; break; } } if( j<i ) blob_append(p, zIn+j, i-j); } | > > > > > | 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 | blob_append(p, "&", 5); j = i+1; break; case '"': if( j<i ) blob_append(p, zIn+j, i-j); blob_append(p, """, 6); j = i+1; break; case '\'': if( j<i ) blob_append(p, zIn+j, i-j); blob_append(p, "'", 5); j = i+1; break; } } if( j<i ) blob_append(p, zIn+j, i-j); } |
︙ | ︙ | |||
365 366 367 368 369 370 371 | } z64[n] = 0; return z64; } /* ** COMMAND: test-encode64 | | | 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 | } z64[n] = 0; return z64; } /* ** COMMAND: test-encode64 ** ** Usage: %fossil test-encode64 STRING */ void test_encode64_cmd(void){ char *z; int i; for(i=2; i<g.argc; i++){ z = encode64(g.argv[i], -1); |
︙ | ︙ | |||
431 432 433 434 435 436 437 | zData[j] = 0; *pnByte = j; return zData; } /* ** COMMAND: test-decode64 | | | 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 | zData[j] = 0; *pnByte = j; return zData; } /* ** COMMAND: test-decode64 ** ** Usage: %fossil test-decode64 STRING */ void test_decode64_cmd(void){ char *z; int i, n; for(i=2; i<g.argc; i++){ z = decode64(g.argv[i], &n); |
︙ | ︙ |
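The encode.c change above teaches htmlize() and htmlize_to_blob() to escape single quotes as &#39; in addition to the existing <, >, & and " handling, so the output is also safe inside single-quoted HTML attribute values. A minimal sketch of the resulting escaping rules, writing straight to a FILE* rather than building a malloc'd string as the real functions do:

#include <stdio.h>

/* Escape the five characters that are special in HTML and attribute context. */
static void demo_htmlize(FILE *out, const char *zIn){
  int c;
  while( (c = *zIn++)!=0 ){
    switch( c ){
      case '<':  fputs("&lt;",   out); break;
      case '>':  fputs("&gt;",   out); break;
      case '&':  fputs("&amp;",  out); break;
      case '"':  fputs("&quot;", out); break;
      case '\'': fputs("&#39;",  out); break;
      default:   fputc(c, out);        break;
    }
  }
}

int main(void){
  demo_htmlize(stdout, "don't use <b> & \"quotes\"\n");
  return 0;
}
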
Changes to src/event.c.
︙ | ︙ | |||
57 58 59 60 61 62 63 | ** ** name=ID Identify the technical note to display. ID must be ** complete. ** aid=ARTIFACTID Which specific version of the tech-note. Optional. ** v=BOOLEAN Show details if TRUE. Default is FALSE. Optional. ** ** Display an existing tech-note identified by its ID, optionally at a | | | 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 | ** ** name=ID Identify the technical note to display. ID must be ** complete. ** aid=ARTIFACTID Which specific version of the tech-note. Optional. ** v=BOOLEAN Show details if TRUE. Default is FALSE. Optional. ** ** Display an existing tech-note identified by its ID, optionally at a ** specific version, and optionally with additional details. */ void event_page(void){ int rid = 0; /* rid of the event artifact */ char *zUuid; /* UUID corresponding to rid */ const char *zId; /* Event identifier */ const char *zVerbose; /* Value of verbose option */ char *zETime; /* Time of the tech-note */ |
︙ | ︙ | |||
150 151 152 153 154 155 156 | } }else{ blob_appendf(&title, "Tech-note %S", zId); tail = fullbody; } style_header("%s", blob_str(&title)); if( g.perm.WrWiki && g.perm.Write && nextRid==0 ){ | | | | < | | | | | | 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 | } }else{ blob_appendf(&title, "Tech-note %S", zId); tail = fullbody; } style_header("%s", blob_str(&title)); if( g.perm.WrWiki && g.perm.Write && nextRid==0 ){ style_submenu_element("Edit", "%R/technoteedit?name=%!S", zId); if( g.perm.Attach ){ style_submenu_element("Attach", "%R/attachadd?technote=%!S&from=%R/technote/%!S", zId, zId); } } zETime = db_text(0, "SELECT datetime(%.17g)", pTNote->rEventDate); style_submenu_element("Context", "%R/timeline?c=%.20s", zId); if( g.perm.Hyperlink ){ if( verboseFlag ){ style_submenu_element("Plain", "%R/technote?name=%!S&aid=%s&mimetype=text/plain", zId, zUuid); if( nextRid ){ char *zNext; zNext = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", nextRid); style_submenu_element("Next", "%R/technote?name=%!S&aid=%s&v", zId, zNext); free(zNext); } if( prevRid ){ char *zPrev; zPrev = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", prevRid); style_submenu_element("Prev", "%R/technote?name=%!S&aid=%s&v", zId, zPrev); free(zPrev); } }else{ style_submenu_element("Detail", "%R/technote?name=%!S&aid=%s&v", zId, zUuid); } } if( verboseFlag && g.perm.Hyperlink ){ int i; const char *zClr = 0; |
︙ | ︙ | |||
233 234 235 236 237 238 239 | attachment_list(zFullId, "<hr /><h2>Attachments:</h2><ul>"); style_footer(); manifest_destroy(pTNote); } /* ** Add or update a new tech note to the repository. rid is id of | | | 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 | attachment_list(zFullId, "<hr /><h2>Attachments:</h2><ul>"); style_footer(); manifest_destroy(pTNote); } /* ** Add or update a new tech note to the repository. rid is id of ** the prior version of this technote, if any. ** ** returns 1 if the tech note was added or updated, 0 if the ** update failed making an invalid artifact */ int event_commit_common( int rid, /* id of the prior version of the technote */ const char *zId, /* hash label for the technote */ |
︙ | ︙ | |||
264 265 266 267 268 269 270 | while( n>0 && fossil_isspace(zComment[n-1]) ){ n--; } if( n>0 ){ blob_appendf(&event, "C %#F\n", n, zComment); } zDate = date_in_standard_format("now"); blob_appendf(&event, "D %s\n", zDate); free(zDate); | | | 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 | while( n>0 && fossil_isspace(zComment[n-1]) ){ n--; } if( n>0 ){ blob_appendf(&event, "C %#F\n", n, zComment); } zDate = date_in_standard_format("now"); blob_appendf(&event, "D %s\n", zDate); free(zDate); zETime[10] = 'T'; blob_appendf(&event, "E %s %s\n", zETime, zId); zETime[10] = ' '; if( rid ){ char *zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); blob_appendf(&event, "P %s\n", zUuid); free(zUuid); |
︙ | ︙ |
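In event_commit_common() above, the event time comes back from SQLite's datetime() as "YYYY-MM-DD HH:MM:SS", while the technote's "E" card needs the "T"-separated ISO form, so the code flips the character at index 10 to 'T' only while the card is appended and then puts the space back. A small sketch of that trick; the timestamp and technote id are invented:

#include <stdio.h>

int main(void){
  char zETime[] = "2016-01-02 03:04:05";   /* as produced by datetime() */
  char zCard[64];
  zETime[10] = 'T';                        /* ISO form for the artifact card */
  snprintf(zCard, sizeof(zCard), "E %s TECHNOTE-ID", zETime);
  zETime[10] = ' ';                        /* restore the display form */
  printf("%s\n", zCard);                   /* E 2016-01-02T03:04:05 TECHNOTE-ID */
  return 0;
}
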
Changes to src/export.c.
︙ | ︙ | |||
130 131 132 133 134 135 136 137 138 139 140 | ); } /* ** create_mark() ** Create a new (mark,rid,uuid) entry for the given rid in the 'xmark' table, ** and return that information as a struct mark_t in *mark. ** This function returns -1 in the case where 'rid' does not exist, otherwise ** it returns 0. ** mark->name is dynamically allocated and is owned by the caller upon return. */ | > > > > | | | > | > > > | | | | | | > | | > | > > > > > > > | | | | > > > | | < < | > | | > > > > > > > > > > > > > > > > > > > > > > > > > | | | < < < < < < < | < < | 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 | ); } /* ** create_mark() ** Create a new (mark,rid,uuid) entry for the given rid in the 'xmark' table, ** and return that information as a struct mark_t in *mark. ** *unused_mark is a value representing a mark that is free for use--that is, ** it does not appear in the marks file, and has not been used during this ** export run. Specifically, it is the supremum of the set of used marks ** plus one. ** This function returns -1 in the case where 'rid' does not exist, otherwise ** it returns 0. ** mark->name is dynamically allocated and is owned by the caller upon return. */ int create_mark(int rid, struct mark_t *mark, unsigned int *unused_mark){ char sid[13]; char *zUuid = rid_to_uuid(rid); if( !zUuid ){ fossil_trace("Undefined rid=%d\n", rid); return -1; } mark->rid = rid; sqlite3_snprintf(sizeof(sid), sid, ":%d", *unused_mark); *unused_mark += 1; mark->name = fossil_strdup(sid); sqlite3_snprintf(sizeof(mark->uuid), mark->uuid, "%s", zUuid); free(zUuid); insert_commit_xref(mark->rid, mark->name, mark->uuid); return 0; } /* ** mark_name_from_rid() ** Find the mark associated with the given rid. Mark names always start ** with ':', and are pulled from the 'xmark' temporary table. ** If the given rid doesn't have a mark associated with it yet, one is ** created with a value of *unused_mark. ** *unused_mark functions exactly as in create_mark(). ** This function returns NULL if the rid does not have an associated UUID, ** (i.e. is not valid). Otherwise, it returns the name of the mark, which is ** dynamically allocated and is owned by the caller of this function. */ char * mark_name_from_rid(int rid, unsigned int *unused_mark){ char *zMark = db_text(0, "SELECT tname FROM xmark WHERE trid=%d", rid); if( zMark==NULL ){ struct mark_t mark; if( create_mark(rid, &mark, unused_mark)==0 ){ zMark = mark.name; }else{ return NULL; } } return zMark; } /* ** parse_mark() ** Create a new (mark,rid,uuid) entry in the 'xmark' table given a line ** from a marks file. Return the cross-ref information as a struct mark_t ** in *mark. 
** This function returns -1 in the case that the line is blank, malformed, or ** the rid/uuid named in 'line' does not match what is in the repository ** database. Otherwise, 0 is returned. ** mark->name is dynamically allocated, and owned by the caller. */ int parse_mark(char *line, struct mark_t *mark){ char *cur_tok; char type_; cur_tok = strtok(line, " \t"); if( !cur_tok || strlen(cur_tok)<2 ){ return -1; } mark->rid = atoi(&cur_tok[1]); type_ = cur_tok[0]; if( type_!='c' && type_!='b' ){ /* This is probably a blob mark */ mark->name = NULL; return 0; } cur_tok = strtok(NULL, " \t"); if( !cur_tok ){ /* This mark was generated by an older version of Fossil and doesn't ** include the mark name and uuid. create_mark() will name the new mark ** exactly as it was when exported to git, so that we should have a ** valid mapping from git sha1<->mark name<->fossil sha1. */ unsigned int mid; if( type_=='c' ){ mid = COMMITMARK(mark->rid); } else{ mid = BLOBMARK(mark->rid); } return create_mark(mark->rid, mark, &mid); }else{ mark->name = fossil_strdup(cur_tok); } cur_tok = strtok(NULL, "\n"); if( !cur_tok || strlen(cur_tok)!=40 ){ free(mark->name); fossil_trace("Invalid SHA-1 in marks file: %s\n", cur_tok); return -1; }else{ sqlite3_snprintf(sizeof(mark->uuid), mark->uuid, "%s", cur_tok); } /* make sure that rid corresponds to UUID */ if( fast_uuid_to_rid(mark->uuid)!=mark->rid ){ free(mark->name); fossil_trace("Non-existent SHA-1 in marks file: %s\n", mark->uuid); return -1; } /* insert a cross-ref into the 'xmark' table */ insert_commit_xref(mark->rid, mark->name, mark->uuid); return 0; } /* ** import_marks() ** Import the marks specified in file 'f' into the 'xmark' table. ** If 'blobs' is non-null, insert all blob marks into it. ** If 'vers' is non-null, insert all commit marks into it. ** If 'unused_marks' is non-null, upon return of this function, all values ** x >= *unused_marks are free to use as marks, i.e. they do not clash with ** any marks appearing in the marks file. ** Each line in the file must be at most 100 characters in length. This ** seems like a reasonable maximum for a 40-character uuid, and 1-13 ** character rid. ** The function returns -1 if any of the lines in file 'f' are malformed, ** or the rid/uuid information doesn't match what is in the repository ** database. Otherwise, 0 is returned. */ int import_marks(FILE* f, Bag *blobs, Bag *vers, unsigned int *unused_mark){ char line[101]; while(fgets(line, sizeof(line), f)){ struct mark_t mark; if( strlen(line)==100 && line[99]!='\n' ){ /* line too long */ return -1; } if( parse_mark(line, &mark)<0 ){ return -1; }else if( line[0]=='b' ){ if( blobs!=NULL ){ bag_insert(blobs, mark.rid); } }else{ if( vers!=NULL ){ bag_insert(vers, mark.rid); } } if( unused_mark!=NULL ){ unsigned int mid = atoi(mark.name + 1); if( mid>=*unused_mark ){ *unused_mark = mid + 1; } } free(mark.name); } return 0; } void export_mark(FILE* f, int rid, char obj_type) { unsigned int z = 0; char *zUuid = rid_to_uuid(rid); char *zMark; if( zUuid==NULL ){ fossil_trace("No uuid matching rid=%d when exporting marks\n", rid); return; } /* Since rid is already in the 'xmark' table, the value of z won't be ** used, but pass in a valid pointer just to be safe. */ zMark = mark_name_from_rid(rid, &z); fprintf(f, "%c%d %s %s\n", obj_type, rid, zMark, zUuid); free(zMark); free(zUuid); } /* ** If 'blobs' is non-null, it must point to a Bag of blob rids to be ** written to disk. Blob rids are written as 'b<rid>'. 
** If 'vers' is non-null, it must point to a Bag of commit rids to be ** written to disk. Commit rids are written as 'c<rid> :<mark> <uuid>'. ** All commit (mark,rid,uuid) tuples are stored in 'xmark' table. ** This function does not fail, but may produce errors if a uuid cannot ** be found for an rid in 'vers'. */ void export_marks(FILE* f, Bag *blobs, Bag *vers){ int rid; if( blobs!=NULL ){ rid = bag_first(blobs); if( rid!=0 ){ do{ export_mark(f, rid, 'b'); }while( (rid = bag_next(blobs, rid))!=0 ); } } if( vers!=NULL ){ rid = bag_first(vers); if( rid!=0 ){ do{ export_mark(f, rid, 'c'); }while( (rid = bag_next(vers, rid))!=0 ); } } } /* ** COMMAND: export |
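The marks file handled by the functions above is line oriented, exactly as the comment's 'c<rid> :<mark> <uuid>' pattern suggests: each line is the object type ('b' for a blob, 'c' for a commit) glued to the rid, then the Fossil mark name (a colon and a number), then the 40-character SHA1 of the artifact. parse_mark() also tolerates a short legacy form carrying only the type and rid, which older Fossil versions wrote. A hypothetical example, with rids, marks, and hashes invented purely for illustration:

    b1234 :17 66cba92c29eb71ab1e30f38d42e5c14d5e984dd4
    c1240 :18 d5e4c51f8b0a27c8e83c5ec9f29c0529ce1d6bfe
    c1251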
︙ | ︙ | |||
334 335 336 337 338 339 340 341 342 343 344 345 346 347 | ** ** See also: import */ void export_cmd(void){ Stmt q, q2, q3; int i; Bag blobs, vers; const char *markfile_in; const char *markfile_out; bag_init(&blobs); bag_init(&vers); find_option("git", 0, 0); /* Ignore the --git option for now */ | > | 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 | ** ** See also: import */ void export_cmd(void){ Stmt q, q2, q3; int i; Bag blobs, vers; unsigned int unused_mark = 1; const char *markfile_in; const char *markfile_out; bag_init(&blobs); bag_init(&vers); find_option("git", 0, 0); /* Ignore the --git option for now */ |
︙ | ︙ | |||
360 361 362 363 364 365 366 | FILE *f; int rid; f = fossil_fopen(markfile_in, "r"); if( f==0 ){ fossil_fatal("cannot open %s for reading", markfile_in); } | | | | | | 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 | FILE *f; int rid; f = fossil_fopen(markfile_in, "r"); if( f==0 ){ fossil_fatal("cannot open %s for reading", markfile_in); } if( import_marks(f, &blobs, &vers, &unused_mark)<0 ){ fossil_fatal("error importing marks from file: %s", markfile_in); } db_prepare(&qb, "INSERT OR IGNORE INTO oldblob VALUES (:rid)"); db_prepare(&qc, "INSERT OR IGNORE INTO oldcommit VALUES (:rid)"); rid = bag_first(&blobs); if( rid!=0 ){ do{ db_bind_int(&qb, ":rid", rid); db_step(&qb); db_reset(&qb); }while((rid = bag_next(&blobs, rid))!=0); } rid = bag_first(&vers); if( rid!=0 ){ do{ db_bind_int(&qc, ":rid", rid); db_step(&qc); db_reset(&qc); }while((rid = bag_next(&vers, rid))!=0); } db_finalize(&qb); |
︙ | ︙ | |||
414 415 416 417 418 419 420 421 422 423 424 | db_prepare(&q2, "INSERT INTO oldblob VALUES (:rid)"); db_prepare(&q3, "SELECT rid FROM newblob WHERE srcid= (:srcid)"); while( db_step(&q)==SQLITE_ROW ){ int rid = db_column_int(&q, 0); Blob content; while( !bag_find(&blobs, rid) ){ content_get(rid, &content); db_bind_int(&q2, ":rid", rid); db_step(&q2); db_reset(&q2); | > > | > | 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 | db_prepare(&q2, "INSERT INTO oldblob VALUES (:rid)"); db_prepare(&q3, "SELECT rid FROM newblob WHERE srcid= (:srcid)"); while( db_step(&q)==SQLITE_ROW ){ int rid = db_column_int(&q, 0); Blob content; while( !bag_find(&blobs, rid) ){ char *zMark; content_get(rid, &content); db_bind_int(&q2, ":rid", rid); db_step(&q2); db_reset(&q2); zMark = mark_name_from_rid(rid, &unused_mark); printf("blob\nmark %s\ndata %d\n", zMark, blob_size(&content)); free(zMark); bag_insert(&blobs, rid); fwrite(blob_buffer(&content), 1, blob_size(&content), stdout); printf("\n"); blob_reset(&content); db_bind_int(&q3, ":srcid", rid); if( db_step(&q3) != SQLITE_ROW ){ |
︙ | ︙ | |||
468 469 470 471 472 473 474 | db_step(&q2); db_reset(&q2); if( zBranch==0 ) zBranch = "trunk"; zBr = mprintf("%s", zBranch); for(i=0; zBr[i]; i++){ if( !fossil_isalnum(zBr[i]) ) zBr[i] = '_'; } | | | | | | > | > | 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 | db_step(&q2); db_reset(&q2); if( zBranch==0 ) zBranch = "trunk"; zBr = mprintf("%s", zBranch); for(i=0; zBr[i]; i++){ if( !fossil_isalnum(zBr[i]) ) zBr[i] = '_'; } zMark = mark_name_from_rid(ckinId, &unused_mark); printf("commit refs/heads/%s\nmark %s\n", zBr, zMark); free(zMark); free(zBr); printf("committer"); print_person(zUser); printf(" %s +0000\n", zSecondsSince1970); if( zComment==0 ) zComment = "null comment"; printf("data %d\n%s\n", (int)strlen(zComment), zComment); db_prepare(&q3, "SELECT pid FROM plink" " WHERE cid=%d AND isprim" " AND pid IN (SELECT objid FROM event)", ckinId ); if( db_step(&q3) == SQLITE_ROW ){ int pid = db_column_int(&q3, 0); zMark = mark_name_from_rid(pid, &unused_mark); printf("from %s\n", zMark); free(zMark); db_prepare(&q4, "SELECT pid FROM plink" " WHERE cid=%d AND NOT isprim" " AND NOT EXISTS(SELECT 1 FROM phantom WHERE rid=pid)" " ORDER BY pid", ckinId); while( db_step(&q4)==SQLITE_ROW ){ zMark = mark_name_from_rid(db_column_int(&q4, 0), &unused_mark); printf("merge %s\n", zMark); free(zMark); } db_finalize(&q4); }else{ printf("deleteall\n"); } db_prepare(&q4, "SELECT filename.name, mlink.fid, mlink.mperm FROM mlink" " JOIN filename ON filename.fnid=mlink.fnid" " WHERE mlink.mid=%d", ckinId ); while( db_step(&q4)==SQLITE_ROW ){ const char *zName = db_column_text(&q4,0); int zNew = db_column_int(&q4,1); int mPerm = db_column_int(&q4,2); if( zNew==0 ){ printf("D %s\n", zName); }else if( bag_find(&blobs, zNew) ){ const char *zPerm; zMark = mark_name_from_rid(zNew, &unused_mark); switch( mPerm ){ case PERM_LNK: zPerm = "120000"; break; case PERM_EXE: zPerm = "100755"; break; default: zPerm = "100644"; break; } printf("M %s %s %s\n", zPerm, zMark, zName); free(zMark); } } db_finalize(&q4); db_finalize(&q3); printf("\n"); } db_finalize(&q2); |
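Taken together, the printf() calls in this loop write a git fast-import stream on stdout: a blob/mark/data record for each needed file version, then a commit record naming the branch, the mark, the committer, the comment, the parent and merge marks, and one "M" or "D" line per changed file. A rough sketch of how one such commit might appear in the stream (mark numbers, user, date, and file name are illustrative only, and the exact committer formatting comes from print_person()):

    blob
    mark :17
    data 27
    (27 bytes of file content)

    commit refs/heads/trunk
    mark :18
    committer alice <alice> 1478479810 +0000
    data 10
    Fix a typo
    from :16
    M 100644 :17 src/file.c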
︙ | ︙ | |||
545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 | " FROM tagxref JOIN tag USING(tagid)" " WHERE tagtype=1 AND tagname GLOB 'sym-*'" ); while( db_step(&q)==SQLITE_ROW ){ const char *zTagname = db_column_text(&q, 0); char *zEncoded = 0; int rid = db_column_int(&q, 1); const char *zSecSince1970 = db_column_text(&q, 2); int i; if( rid==0 || !bag_find(&vers, rid) ) continue; zTagname += 4; zEncoded = mprintf("%s", zTagname); for(i=0; zEncoded[i]; i++){ if( !fossil_isalnum(zEncoded[i]) ) zEncoded[i] = '_'; } printf("tag %s\n", zEncoded); | > | > | | 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 | " FROM tagxref JOIN tag USING(tagid)" " WHERE tagtype=1 AND tagname GLOB 'sym-*'" ); while( db_step(&q)==SQLITE_ROW ){ const char *zTagname = db_column_text(&q, 0); char *zEncoded = 0; int rid = db_column_int(&q, 1); char *zMark = mark_name_from_rid(rid, &unused_mark); const char *zSecSince1970 = db_column_text(&q, 2); int i; if( rid==0 || !bag_find(&vers, rid) ) continue; zTagname += 4; zEncoded = mprintf("%s", zTagname); for(i=0; zEncoded[i]; i++){ if( !fossil_isalnum(zEncoded[i]) ) zEncoded[i] = '_'; } printf("tag %s\n", zEncoded); printf("from %s\n", zMark); free(zMark); printf("tagger <tagger> %s +0000\n", zSecSince1970); printf("data 0\n"); fossil_free(zEncoded); } db_finalize(&q); if( markfile_out!=0 ){ FILE *f; f = fossil_fopen(markfile_out, "w"); if( f == 0 ){ fossil_fatal("cannot open %s for writing", markfile_out); } export_marks(f, &blobs, &vers); if( ferror(f)!=0 || fclose(f)!=0 ){ fossil_fatal("error while writing %s", markfile_out); } } bag_clear(&blobs); bag_clear(&vers); } |
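The --import-marks/--export-marks pair is what makes repeated exports incremental: marks saved by one run are reloaded on the next, so previously exported blobs and check-ins keep their mark numbers and are not emitted again. A plausible mirroring command, assuming a repository file repo.fossil and mark files named fossil.marks and git.marks (omit the --import-marks options on the very first run, before the mark files exist):

    fossil export --git --import-marks fossil.marks --export-marks fossil.marks repo.fossil \
        | git fast-import --import-marks=git.marks --export-marks=git.marks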
Changes to src/file.c.
︙ | ︙ | |||
291 292 293 294 295 296 297 | }else{ rc = getStat(0, 0); } return rc ? 0 : (S_ISDIR(fileStat.st_mode) ? 1 : 2); } /* | | > > > > < | | | > > > > > > | > > | > | | 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 | }else{ rc = getStat(0, 0); } return rc ? 0 : (S_ISDIR(fileStat.st_mode) ? 1 : 2); } /* ** Same as file_isdir(), but takes into account symlinks. Return 1 if ** zFilename is a directory -OR- a symlink that points to a directory. ** Return 0 if zFilename does not exist. Return 2 if zFilename exists ** but is something other than a directory. */ int file_wd_isdir(const char *zFilename){ int rc; char *zFN; zFN = mprintf("%s", zFilename); file_simplify_name(zFN, -1, 0); rc = getStat(zFN, 1); if( rc ){ rc = 0; /* It does not exist at all. */ }else if( S_ISDIR(fileStat.st_mode) ){ rc = 1; /* It exists and is a real directory. */ }else if( S_ISLNK(fileStat.st_mode) ){ Blob content; blob_read_link(&content, zFN); /* It exists and is a link. */ rc = file_wd_isdir(blob_str(&content)); /* Points to directory? */ blob_reset(&content); }else{ rc = 2; /* It exists and is something else. */ } free(zFN); return rc; } /* ** Wrapper around the access() system call. */ int file_access(const char *zFilename, int flags){ |
︙ | ︙ | |||
469 470 471 472 473 474 475 | int file_wd_setexe(const char *zFilename, int onoff){ int rc = 0; #if !defined(_WIN32) struct stat buf; if( fossil_stat(zFilename, &buf, 1)!=0 || S_ISLNK(buf.st_mode) ) return 0; if( onoff ){ int targetMode = (buf.st_mode & 0444)>>2; | | | | 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 | int file_wd_setexe(const char *zFilename, int onoff){ int rc = 0; #if !defined(_WIN32) struct stat buf; if( fossil_stat(zFilename, &buf, 1)!=0 || S_ISLNK(buf.st_mode) ) return 0; if( onoff ){ int targetMode = (buf.st_mode & 0444)>>2; if( (buf.st_mode & 0100)==0 ){ chmod(zFilename, buf.st_mode | targetMode); rc = 1; } }else{ if( (buf.st_mode & 0100)!=0 ){ chmod(zFilename, buf.st_mode & ~0111); rc = 1; } } #endif /* _WIN32 */ return rc; } |
︙ | ︙ | |||
519 520 521 522 523 524 525 | void test_set_mtime(void){ const char *zFile; char *zDate; i64 iMTime; if( g.argc!=4 ){ usage("FILENAME DATE/TIME"); } | | | 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 | void test_set_mtime(void){ const char *zFile; char *zDate; i64 iMTime; if( g.argc!=4 ){ usage("FILENAME DATE/TIME"); } db_open_or_attach(":memory:", "mem"); iMTime = db_int64(0, "SELECT strftime('%%s',%Q)", g.argv[3]); zFile = g.argv[2]; file_set_mtime(zFile, iMTime); iMTime = file_wd_mtime(zFile); zDate = db_text(0, "SELECT datetime(%lld, 'unixepoch')", iMTime); fossil_print("Set mtime of \"%s\" to %s (%lld)\n", zFile, zDate, iMTime); } |
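For reference, this test command takes a file name and a date/time in a form SQLite's strftime() understands; a run along these lines (file name and time are only examples) sets the timestamp and then reports it back:

    fossil test-set-mtime foo.txt '2016-11-07 00:50:10'
    Set mtime of "foo.txt" to 2016-11-07 00:50:10 (1478479810)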
︙ | ︙ | |||
598 599 600 601 602 603 604 | /* ** On Windows, local path looks like: C:/develop/project/file.txt ** The if stops us from trying to create a directory of a drive letter ** C: in this example. */ if( !(i==2 && zName[1]==':') ){ #endif | | | 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 | /* ** On Windows, local path looks like: C:/develop/project/file.txt ** The if stops us from trying to create a directory of a drive letter ** C: in this example. */ if( !(i==2 && zName[1]==':') ){ #endif if( file_mkdir(zName, forceFlag) && file_wd_isdir(zName)!=1 ){ if (errorReturn <= 0) { fossil_fatal_recursive("unable to create directory %s", zName); } rc = errorReturn; break; } #if defined(_WIN32) || defined(__CYGWIN__) |
︙ | ︙ | |||
849 850 851 852 853 854 855 | */ void file_getcwd(char *zBuf, int nBuf){ #ifdef _WIN32 win32_getcwd(zBuf, nBuf); #else if( getcwd(zBuf, nBuf-1)==0 ){ if( errno==ERANGE ){ | | | 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 | */ void file_getcwd(char *zBuf, int nBuf){ #ifdef _WIN32 win32_getcwd(zBuf, nBuf); #else if( getcwd(zBuf, nBuf-1)==0 ){ if( errno==ERANGE ){ fossil_fatal("pwd too big: max %d", nBuf-1); }else{ fossil_fatal("cannot find current working directory; %s", strerror(errno)); } } #endif } |
︙ | ︙ | |||
921 922 923 924 925 926 927 | #endif blob_resize(pOut, file_simplify_name(blob_buffer(pOut), blob_size(pOut), slash)); } /* ** COMMAND: test-canonical-name | | | 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 | #endif blob_resize(pOut, file_simplify_name(blob_buffer(pOut), blob_size(pOut), slash)); } /* ** COMMAND: test-canonical-name ** ** Usage: %fossil test-canonical-name FILENAME... ** ** Test the operation of the canonical name generator. ** Also test Fossil's ability to measure attributes of a file. */ void cmd_test_canonical_name(void){ int i; |
︙ | ︙ | |||
1276 1277 1278 1279 1280 1281 1282 | } azDirs[1] = fossil_getenv("TEMP"); azDirs[2] = fossil_getenv("TMP"); #endif | | | 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 | } azDirs[1] = fossil_getenv("TEMP"); azDirs[2] = fossil_getenv("TMP"); #endif for(i=0; i<count(azDirs); i++){ if( azDirs[i]==0 ) continue; if( !file_isdir(azDirs[i]) ) continue; zDir = azDirs[i]; break; } /* Check that the output buffer is large enough for the temporary file |
︙ | ︙ | |||
1382 1383 1384 1385 1386 1387 1388 | fossil_path_free(uName); fossil_unicode_free(uMode); #else FILE *f = fopen(zName, zMode); #endif return f; } | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 | fossil_path_free(uName); fossil_unicode_free(uMode); #else FILE *f = fopen(zName, zMode); #endif return f; } /* ** Return non-NULL if zFilename contains pathname elements that ** are reserved on Windows. The returned string is the disallowed ** path element. */ const char *file_is_win_reserved(const char *zPath){ static const char *azRes[] = { "CON", "PRN", "AUX", "NUL", "COM", "LPT" }; static char zReturn[5]; int i; while( zPath[0] ){ for(i=0; i<count(azRes); i++){ if( sqlite3_strnicmp(zPath, azRes[i], 3)==0 && ((i>=4 && fossil_isdigit(zPath[3]) && (zPath[4]=='/' || zPath[4]=='.' || zPath[4]==0)) || (i<4 && (zPath[3]=='/' || zPath[3]=='.' || zPath[3]==0))) ){ sqlite3_snprintf(5,zReturn,"%.*s", i>=4 ? 4 : 3, zPath); return zReturn; } } while( zPath[0] && zPath[0]!='/' ) zPath++; while( zPath[0]=='/' ) zPath++; } return 0; } /* ** COMMAND: test-valid-for-windows ** Usage: fossil test-valid-for-windows FILENAME .... ** ** Show which filenames are not valid for Windows */ void file_test_valid_for_windows(void){ int i; for(i=2; i<g.argc; i++){ fossil_print("%s %s\n", file_is_win_reserved(g.argv[i]), g.argv[i]); } } |
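file_is_win_reserved() flags any path element whose base name is one of the classic DOS device names, CON, PRN, AUX, NUL, or COM/LPT followed by a digit, whether the element stands alone, carries an extension, or sits deeper in the path. The companion command can screen a list of names; in a hypothetical run such as

    fossil test-valid-for-windows src/nul.c aux COM1.txt doc/readme.txt

the first three names would be reported as containing a reserved element and the last would pass.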
Changes to src/finfo.c.
︙ | ︙ | |||
280 281 282 283 284 285 286 | ** WEBPAGE: finfo ** URL: /finfo?name=FILENAME ** ** Show the change history for a single file. ** ** Additional query parameters: ** | | | > > > > > | 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 | ** WEBPAGE: finfo ** URL: /finfo?name=FILENAME ** ** Show the change history for a single file. ** ** Additional query parameters: ** ** a=DATETIME Only show changes after DATETIME ** b=DATETIME Only show changes before DATETIME ** n=NUM Show the first NUM changes only ** brbg Background color by branch name ** ubg Background color by user name ** ci=UUID Ancestors of a particular check-in ** showid Show RID values for debugging ** ** DATETIME may be "now" or "YYYY-MM-DDTHH:MM:SS.SSS". If in ** year-month-day form, it may be truncated, and it may also name a ** timezone offset from UTC as "-HH:MM" (westward) or "+HH:MM" ** (eastward). Either no timezone suffix or "Z" means UTC. */ void finfo_page(void){ Stmt q; const char *zFilename; char zPrevDate[20]; const char *zA; const char *zB; |
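For example, a request such as /finfo?name=src/file.c&a=2016-01-01&b=2016-11-07&n=20 (an illustrative URL, not one taken from a real server) would list at most twenty changes to src/file.c made between those two dates; per the note above, either date may also carry a time of day and an explicit timezone offset.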
︙ | ︙ | |||
324 325 326 327 328 329 330 | fnid = db_int(0, "SELECT fnid FROM filename WHERE name=%Q", zFilename); if( fnid==0 ){ @ No such file: %h(zFilename) style_footer(); return; } if( g.perm.Admin ){ | | | 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 | fnid = db_int(0, "SELECT fnid FROM filename WHERE name=%Q", zFilename); if( fnid==0 ){ @ No such file: %h(zFilename) style_footer(); return; } if( g.perm.Admin ){ style_submenu_element("MLink Table", "%R/mlink?name=%t", zFilename); } if( baseCheckin ){ compute_direct_ancestors(baseCheckin); } url_add_parameter(&url, "name", zFilename); blob_zero(&sql); blob_append_sql(&sql, |
︙ | ︙ | |||
404 405 406 407 408 409 410 | blob_appendf(&title,"<a href='%R/finfo?name=%T'>%h</a>", zFilename, zFilename); if( fShowId ) blob_appendf(&title, " (%d)", fnid); blob_appendf(&title, " from check-in %z%S</a>", zLink, zUuid); if( fShowId ) blob_appendf(&title, " (%d)", baseCheckin); fossil_free(zUuid); }else{ | | | 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 | blob_appendf(&title,"<a href='%R/finfo?name=%T'>%h</a>", zFilename, zFilename); if( fShowId ) blob_appendf(&title, " (%d)", fnid); blob_appendf(&title, " from check-in %z%S</a>", zLink, zUuid); if( fShowId ) blob_appendf(&title, " (%d)", baseCheckin); fossil_free(zUuid); }else{ blob_appendf(&title, "History of "); hyperlinked_path(zFilename, &title, 0, "tree", ""); if( fShowId ) blob_appendf(&title, " (%d)", fnid); } @ <h2>%b(&title)</h2> blob_reset(&title); pGraph = graph_init(); @ <table id="timelineTable" class="timelineTable"> |
︙ | ︙ | |||
447 448 449 450 451 452 453 | char zTime[10]; int nParent = 0; int aParent[GR_MAX_RAIL]; db_bind_int(&qparent, ":fid", frid); db_bind_int(&qparent, ":mid", fmid); db_bind_int(&qparent, ":fnid", fnid); | | | 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 | char zTime[10]; int nParent = 0; int aParent[GR_MAX_RAIL]; db_bind_int(&qparent, ":fid", frid); db_bind_int(&qparent, ":mid", fmid); db_bind_int(&qparent, ":fnid", fnid); while( db_step(&qparent)==SQLITE_ROW && nParent<count(aParent) ){ aParent[nParent] = db_column_int(&qparent, 0); nParent++; } db_reset(&qparent); if( zBr==0 ) zBr = "trunk"; if( uBg ){ zBgClr = hash_color(zUser); |
︙ | ︙ | |||
529 530 531 532 533 534 535 | if( fpid>0 ){ @ %z(href("%R/fdiff?sbs=1&v1=%!S&v2=%!S",zPUuid,zUuid))[diff]</a> } } if( fDebug & FINFO_DEBUG_MLINK ){ int ii; char *zAncLink; | | | 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 | if( fpid>0 ){ @ %z(href("%R/fdiff?sbs=1&v1=%!S&v2=%!S",zPUuid,zUuid))[diff]</a> } } if( fDebug & FINFO_DEBUG_MLINK ){ int ii; char *zAncLink; @ <br />fid=%d(frid) pid=%d(fpid) mid=%d(fmid) if( nParent>0 ){ @ parents=%d(aParent[0]) for(ii=1; ii<nParent; ii++){ @ %d(aParent[ii]) } } zAncLink = href("%R/finfo?name=%T&ci=%!S&debug=1",zFilename,zCkin); |
︙ | ︙ | |||
571 572 573 574 575 576 577 | ** a particular check-in. This screen is intended for use by developers ** in debugging Fossil. */ void mlink_page(void){ const char *zFName = P("name"); const char *zCI = P("ci"); Stmt q; | | | 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 | ** a particular check-in. This screen is intended for use by developers ** in debugging Fossil. */ void mlink_page(void){ const char *zFName = P("name"); const char *zCI = P("ci"); Stmt q; login_check_credentials(); if( !g.perm.Admin ){ login_needed(g.anon.Admin); return; } style_header("MLINK Table"); if( zFName==0 && zCI==0 ){ @ <span class='generalError'> @ Requires either a name= or ci= query parameter @ </span> |
︙ | ︙ | |||
674 675 676 677 678 679 680 | /* 6 */ " mperm," /* 7 */ " isaux" " FROM mlink WHERE mid=%d ORDER BY 1", mid ); @ <h1>MLINK table for check-in %h(zCI)</h1> render_checkin_context(mid, 1); | | | 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 | /* 6 */ " mperm," /* 7 */ " isaux" " FROM mlink WHERE mid=%d ORDER BY 1", mid ); @ <h1>MLINK table for check-in %h(zCI)</h1> render_checkin_context(mid, 1); @ <hr /> @ <div class='brlist'> @ <table id='mlinktable'> @ <thead><tr> @ <th>File</th> @ <th>From</th> @ <th>Merge?</th> @ <th>New</th> |
︙ | ︙ | |||
698 699 700 701 702 703 704 | const char *zPrior = db_column_text(&q,4); const char *zParent = db_column_text(&q,5); int isExec = db_column_int(&q,6); int isAux = db_column_int(&q,7); @ <tr> @ <td><a href='%R/finfo?name=%t(zName)'>%h(zName)</a></td> if( zParent ){ | | | 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 | const char *zPrior = db_column_text(&q,4); const char *zParent = db_column_text(&q,5); int isExec = db_column_int(&q,6); int isAux = db_column_int(&q,7); @ <tr> @ <td><a href='%R/finfo?name=%t(zName)'>%h(zName)</a></td> if( zParent ){ @ <td><a href='%R/info/%!S(zParent)'>%S(zParent)</a></td> }else{ @ <td><i>(New)</i></td> } @ <td align='center'>%s(isAux?"✓":"")</td> if( zFid ){ @ <td><a href='%R/info/%!S(zFid)'>%S(zFid)</a></td> }else{ |
︙ | ︙ |
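As a usage note for the debugging page above: /mlink expects either a name= or a ci= query parameter, so requests shaped like /mlink?name=src/file.c or /mlink?ci=UUID (illustrative values) show the MLINK rows for one file or for one check-in respectively, and both forms require Admin permission.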
Changes to src/foci.c.
︙ | ︙ | |||
12 13 14 15 16 17 18 | ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** This routine implements eponymous virtual table for SQLite that gives | | > | < < < < | | > > > > > > > > > | > > > > > > > > | 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 | ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** This routine implements eponymous virtual table for SQLite that gives ** all of the files associated with a single check-in. The table works ** as a table-valued function. ** ** The source code filename "foci" is short for "Files of Check-in". ** ** Usage example: ** ** SELECT * FROM files_of_checkin('trunk'); ** ** The "schema" for the temp.foci table is: ** ** CREATE TABLE files_of_checkin( ** checkinID INTEGER, -- RID for the check-in manifest ** filename TEXT, -- Name of a file ** uuid TEXT, -- SHA1 hash of the file ** previousName TEXT, -- Name of the file in previous check-in ** perm TEXT, -- Permissions on the file ** symname TEXT HIDDEN -- Symbolic name of the check-in. ** ); ** ** The hidden symname column is (optionally) used as a query parameter to ** identify the particular check-in to parse. The checkinID parameter ** (such is a unique numeric RID rather than symbolic name) can also be used ** to identify the check-in. Example: ** ** SELECT * FROM files_of_checkin ** WHERE checkinID=symbolic_name_to_rid('trunk'); ** */ #include "config.h" #include "foci.h" #include <assert.h> /* ** The schema for the virtual table: */ static const char zFociSchema[] = @ CREATE TABLE files_of_checkin( @ checkinID INTEGER, -- RID for the check-in manifest @ filename TEXT, -- Name of a file @ uuid TEXT, -- SHA1 hash of the file @ previousName TEXT, -- Name of the file in previous check-in @ perm TEXT, -- Permissions on the file @ symname TEXT HIDDEN -- Symbolic name of the check-in @ ); ; #define FOCI_CHECKINID 0 #define FOCI_FILENAME 1 #define FOCI_UUID 2 #define FOCI_PREVNAME 3 #define FOCI_PERM 4 #define FOCI_SYMNAME 5 #if INTERFACE /* ** The subclasses of sqlite3_vtab and sqlite3_vtab_cursor tables ** that implement the files_of_checkin virtual table. */ struct FociTable { |
︙ | ︙ | |||
100 101 102 103 104 105 106 | return SQLITE_OK; } /* ** Available scan methods: ** ** (0) A full scan. Visit every manifest in the repo. (Slow) | | > | > | > > | > > > | 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 | return SQLITE_OK; } /* ** Available scan methods: ** ** (0) A full scan. Visit every manifest in the repo. (Slow) ** (1) checkinID=?. visit only the single manifest specified. ** (2) symName=? visit only the single manifest specified. */ static int fociBestIndex(sqlite3_vtab *tab, sqlite3_index_info *pIdxInfo){ int i; pIdxInfo->estimatedCost = 10000.0; for(i=0; i<pIdxInfo->nConstraint; i++){ if( pIdxInfo->aConstraint[i].op==SQLITE_INDEX_CONSTRAINT_EQ && (pIdxInfo->aConstraint[i].iColumn==FOCI_CHECKINID || pIdxInfo->aConstraint[i].iColumn==FOCI_SYMNAME) ){ if( pIdxInfo->aConstraint[i].iColumn==FOCI_CHECKINID ){ pIdxInfo->idxNum = 1; }else{ pIdxInfo->idxNum = 2; } pIdxInfo->estimatedCost = 1.0; pIdxInfo->aConstraintUsage[i].argvIndex = 1; pIdxInfo->aConstraintUsage[i].omit = 1; break; } } return SQLITE_OK; |
︙ | ︙ | |||
163 164 165 166 167 168 169 | sqlite3_vtab_cursor *pCursor, int idxNum, const char *idxStr, int argc, sqlite3_value **argv ){ FociCursor *pCur = (FociCursor *)pCursor; manifest_destroy(pCur->pMan); if( idxNum ){ | > > > > > > | | | | | | > > | 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 | sqlite3_vtab_cursor *pCursor, int idxNum, const char *idxStr, int argc, sqlite3_value **argv ){ FociCursor *pCur = (FociCursor *)pCursor; manifest_destroy(pCur->pMan); if( idxNum ){ int rid; if( idxNum==1 ){ rid = sqlite3_value_int(argv[0]); }else{ rid = symbolic_name_to_rid((const char*)sqlite3_value_text(argv[0]),"ci"); } pCur->pMan = manifest_get(rid, CFTYPE_MANIFEST, 0); if( pCur->pMan ){ manifest_file_rewind(pCur->pMan); pCur->pFile = manifest_file_next(pCur->pMan, 0); } }else{ pCur->pMan = 0; } pCur->iFile = 0; return SQLITE_OK; } static int fociColumn( sqlite3_vtab_cursor *pCursor, sqlite3_context *ctx, int i ){ FociCursor *pCsr = (FociCursor *)pCursor; switch( i ){ case FOCI_CHECKINID: sqlite3_result_int(ctx, pCsr->pMan->rid); break; case FOCI_FILENAME: sqlite3_result_text(ctx, pCsr->pFile->zName, -1, SQLITE_TRANSIENT); break; case FOCI_UUID: sqlite3_result_text(ctx, pCsr->pFile->zUuid, -1, SQLITE_TRANSIENT); break; case FOCI_PREVNAME: sqlite3_result_text(ctx, pCsr->pFile->zPrior, -1, SQLITE_TRANSIENT); break; case FOCI_PERM: sqlite3_result_text(ctx, pCsr->pFile->zPerm, -1, SQLITE_TRANSIENT); break; case FOCI_SYMNAME: break; } return SQLITE_OK; } static int fociRowid(sqlite3_vtab_cursor *pCursor, sqlite_int64 *pRowid){ FociCursor *pCsr = (FociCursor *)pCursor; |
︙ | ︙ |
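With the hidden symname column, a query can now name the check-in directly in the table-valued-function call instead of first resolving it to a RID. A query of roughly this shape (adapted from the usage example in the header comment; the filename filter is only an illustration) selects part of one check-in's file list:

    SELECT filename, uuid
      FROM files_of_checkin('trunk')
     WHERE filename LIKE 'src/%';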
Added src/fshell.c.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 | /* ** Copyright (c) 2016 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** This module contains the code that implements the "fossil shell" command. ** ** The fossil shell prompts for lines of user input, then parses each line ** after the fashion of a standard Bourne shell and forks a child process ** to run the corresponding Fossil command. This only works on Unix. ** ** The "fossil shell" command is intended for use with SEE-enabled fossil. ** It allows multiple commands to be issued without having to reenter the ** crypto phasephrase for each command. */ #include "config.h" #include "fshell.h" #include "linenoise.h" #include <ctype.h> #ifndef _WIN32 #include <sys/types.h> #include <sys/wait.h> #endif /* ** COMMAND: shell* ** ** Usage: %fossil shell ** ** Prompt for lines of input from stdin. Parse each line and evaluate ** it as a separate fossil command, in a child process. The initial ** "fossil" is omitted from each line. ** ** This command only works on unix-like platforms that support fork(). ** It is non-functional on Windows. 
*/ void shell_cmd(void){ #ifdef _WIN32 fossil_fatal("the 'shell' command is not supported on windows"); #else int nArg; int mxArg = 0; int n, i; char **azArg = 0; int fDebug; pid_t childPid; char *zLine = 0; fDebug = find_option("debug", 0, 0)!=0; db_find_and_open_repository(OPEN_ANY_SCHEMA|OPEN_OK_NOT_FOUND, 0); db_close(0); sqlite3_shutdown(); while( (free(zLine), zLine = linenoise("fossil> ")) ){ /* Remember shell history within the current session */ linenoiseHistoryAdd(zLine); /* Parse the line of input */ n = (int)strlen(zLine); for(i=0, nArg=1; i<n; i++){ while( fossil_isspace(zLine[i]) ){ i++; } if( i>=n ) break; if( nArg>=mxArg ){ mxArg = nArg+10; azArg = fossil_realloc(azArg, sizeof(char*)*mxArg); if( nArg==1 ) azArg[0] = g.argv[0]; } if( zLine[i]=='"' || zLine[i]=='\'' ){ char cQuote = zLine[i]; i++; azArg[nArg++] = &zLine[i]; for(i++; i<n && zLine[i]!=cQuote; i++){} }else{ azArg[nArg++] = &zLine[i]; while( i<n && !isspace(zLine[i]) ){ i++; } } zLine[i] = 0; } /* If the --debug flag was used, display the parsed arguments */ if( fDebug ){ for(i=1; i<nArg; i++){ fossil_print("argv[%d] = [%s]\n", i, azArg[i]); } } /* Special cases */ if( nArg<2 ) continue; if( fossil_strcmp(azArg[1],"exit")==0 ) break; /* Fork a process to handle the command */ childPid = fork(); if( childPid<0 ){ printf("could not fork a child process to handle the command\n"); fflush(stdout); continue; } if( childPid==0 ){ /* This is the child process */ int main(int, char**); main(nArg, azArg); exit(0); }else{ /* The parent process */ int status; waitpid(childPid, &status, 0); } } #endif } |
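A short illustrative session, assuming a Unix build and an open checkout; the lines typed at the prompt are ordinary Fossil commands with the leading "fossil" omitted, and "exit" leaves the shell:

    $ fossil shell
    fossil> status
    fossil> timeline -n 5
    fossil> exit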
Changes to src/fusefs.c.
︙ | ︙ | |||
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 | ** This module implements the userspace side of a Fuse Filesystem that ** contains all check-ins for a fossil repository. ** ** This module is a mostly a no-op unless compiled with -DFOSSIL_HAVE_FUSEFS. ** The FOSSIL_HAVE_FUSEFS should be omitted on systems that lack support for ** the Fuse Filesystem, of course. */ #include "config.h" #include <stdio.h> #include <string.h> #include <errno.h> #include <fcntl.h> #include <stdlib.h> #include <unistd.h> #include <sys/types.h> #include "fusefs.h" | > < | 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 | ** This module implements the userspace side of a Fuse Filesystem that ** contains all check-ins for a fossil repository. ** ** This module is a mostly a no-op unless compiled with -DFOSSIL_HAVE_FUSEFS. ** The FOSSIL_HAVE_FUSEFS should be omitted on systems that lack support for ** the Fuse Filesystem, of course. */ #ifdef FOSSIL_HAVE_FUSEFS #include "config.h" #include <stdio.h> #include <string.h> #include <errno.h> #include <fcntl.h> #include <stdlib.h> #include <unistd.h> #include <sys/types.h> #include "fusefs.h" #define FUSE_USE_VERSION 26 #include <fuse.h> /* ** Global state information about the archive */ |
︙ | ︙ | |||
49 50 51 52 53 54 55 | ManifestFile *pFile; /* Name of a cached file */ Blob content; /* Content of the cached file */ /* Parsed path */ char *az[3]; /* 0=type, 1=id, 2=path */ } fusefs; /* | | | 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 | ManifestFile *pFile; /* Name of a cached file */ Blob content; /* Content of the cached file */ /* Parsed path */ char *az[3]; /* 0=type, 1=id, 2=path */ } fusefs; /* ** Clear the fusefs.az[] array. */ static void fusefs_clear_path(void){ int i; for(i=0; i<count(fusefs.az); i++){ fossil_free(fusefs.az[i]); fusefs.az[i] = 0; } |
︙ | ︙ | |||
205 206 207 208 209 210 211 | fusefs_load_rid(rid, fusefs.az[1]); if( fusefs.pMan==0 ) return -ENOENT; filler(buf, ".", NULL, 0); filler(buf, "..", NULL, 0); manifest_file_rewind(fusefs.pMan); if( n==2 ){ while( (pFile = manifest_file_next(fusefs.pMan, 0))!=0 ){ | | > | 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 | fusefs_load_rid(rid, fusefs.az[1]); if( fusefs.pMan==0 ) return -ENOENT; filler(buf, ".", NULL, 0); filler(buf, "..", NULL, 0); manifest_file_rewind(fusefs.pMan); if( n==2 ){ while( (pFile = manifest_file_next(fusefs.pMan, 0))!=0 ){ if( nPrev>0 && strncmp(pFile->zName, zPrev, nPrev)==0 && pFile->zName[nPrev]=='/' ) continue; zPrev = pFile->zName; for(nPrev=0; zPrev[nPrev] && zPrev[nPrev]!='/'; nPrev++){} z = mprintf("%.*s", nPrev, zPrev); filler(buf, z, NULL, 0); fossil_free(z); cnt++; } |
︙ | ︙ | |||
280 281 282 283 284 285 286 | } static struct fuse_operations fusefs_methods = { .getattr = fusefs_getattr, .readdir = fusefs_readdir, .read = fusefs_read, }; | < | 281 282 283 284 285 286 287 288 289 290 291 292 293 294 | } static struct fuse_operations fusefs_methods = { .getattr = fusefs_getattr, .readdir = fusefs_readdir, .read = fusefs_read, }; /* ** COMMAND: fusefs ** ** Usage: %fossil fusefs [--debug] DIRECTORY ** ** This command uses the Fuse Filesystem (FuseFS) to mount a directory |
︙ | ︙ | |||
312 313 314 315 316 317 318 | ** appropriate support libraries. ** ** After stopping the "fossil fusefs" command, it might also be necessary ** to run "fusermount -u DIRECTORY" to reset the FuseFS before using it ** again. */ void fusefs_cmd(void){ | < < < | 312 313 314 315 316 317 318 319 320 321 322 323 324 325 | ** appropriate support libraries. ** ** After stopping the "fossil fusefs" command, it might also be necessary ** to run "fusermount -u DIRECTORY" to reset the FuseFS before using it ** again. */ void fusefs_cmd(void){ char *zMountPoint; char *azNewArgv[5]; int doDebug = find_option("debug","d",0)!=0; db_find_and_open_repository(0,0); verify_all_options(); blob_init(&fusefs.content, 0, 0); |
︙ | ︙ | |||
336 337 338 339 340 341 342 | azNewArgv[2] = "-s"; azNewArgv[3] = zMountPoint; azNewArgv[4] = 0; g.localOpen = 0; /* Prevent tags like "current" and "prev" */ fuse_main(4, azNewArgv, &fusefs_methods, NULL); fusefs_reset(); fusefs_clear_path(); | < > | 333 334 335 336 337 338 339 340 341 | azNewArgv[2] = "-s"; azNewArgv[3] = zMountPoint; azNewArgv[4] = 0; g.localOpen = 0; /* Prevent tags like "current" and "prev" */ fuse_main(4, azNewArgv, &fusefs_methods, NULL); fusefs_reset(); fusefs_clear_path(); } #endif /* FOSSIL_HAVE_FUSEFS */ |
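A typical way to exercise this, assuming Fossil was built with -DFOSSIL_HAVE_FUSEFS and using an illustrative mount point, is to mount onto an empty directory and, as the help text above notes, unmount with fusermount afterwards:

    mkdir ~/fossil-mnt
    fossil fusefs ~/fossil-mnt
    ...browse the mounted tree...
    fusermount -u ~/fossil-mnt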
Changes to src/graph.c.
︙ | ︙ | |||
256 257 258 259 260 261 262 | */ static void assignChildrenToRail(GraphRow *pBottom){ int iRail = pBottom->iRail; GraphRow *pCurrent; GraphRow *pPrior; u64 mask = ((u64)1)<<iRail; | < | 256 257 258 259 260 261 262 263 264 265 266 267 268 269 | */ static void assignChildrenToRail(GraphRow *pBottom){ int iRail = pBottom->iRail; GraphRow *pCurrent; GraphRow *pPrior; u64 mask = ((u64)1)<<iRail; pBottom->railInUse |= mask; pPrior = pBottom; for(pCurrent=pBottom->pChild; pCurrent; pCurrent=pCurrent->pChild){ assert( pPrior->idx > pCurrent->idx ); assert( pCurrent->iRail<0 ); pCurrent->iRail = iRail; pCurrent->railInUse |= mask; |
︙ | ︙ | |||
342 343 344 345 346 347 348 | /* ** Compute the complete graph */ void graph_finish(GraphContext *p, int omitDescenders){ GraphRow *pRow, *pDesc, *pDup, *pLoop, *pParent; | | > > > | 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 | /* ** Compute the complete graph */ void graph_finish(GraphContext *p, int omitDescenders){ GraphRow *pRow, *pDesc, *pDup, *pLoop, *pParent; int i, j; u64 mask; int hasDup = 0; /* True if one or more isDup entries */ const char *zTrunk; int railRid[GR_MAX_RAIL]; /* Maps rails to rids for lines that enter from bottom of screen */ if( p==0 || p->pFirst==0 || p->nErr ) return; p->nErr = 1; /* Assume an error until proven otherwise */ /* Initialize all rows */ p->nHash = p->nRow*2 + 1; p->apHash = safeMalloc( sizeof(p->apHash[0])*p->nHash ); for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ if( pRow->pNext ) pRow->pNext->pPrev = pRow; pRow->iRail = -1; pRow->mergeOut = -1; if( (pDup = hashFind(p, pRow->rid))!=0 ){ hasDup = 1; pDup->isDup = 1; } hashInsert(p, pRow, 1); } p->mxRail = -1; memset(railRid, 0, sizeof(railRid)); /* Purge merge-parents that are out-of-graph if descenders are not ** drawn. ** ** Each node has one primary parent and zero or more "merge" parents. ** A merge parent is a prior check-in from which changes were merged into ** the current check-in. If a merge parent is not in the visible section |
︙ | ︙ | |||
456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 | }else{ pRow->iRail = ++p->mxRail; } if( p->mxRail>=GR_MAX_RAIL ) return; mask = BIT(pRow->iRail); if( !omitDescenders ){ pRow->bDescender = pRow->nParent>0; for(pLoop=pRow; pLoop; pLoop=pLoop->pNext){ pLoop->railInUse |= mask; } } assignChildrenToRail(pRow); } } } /* Assign rails to all rows that are still unassigned. */ for(pRow=p->pLast; pRow; pRow=pRow->pPrev){ int parentRid; if( pRow->iRail>=0 ){ if( pRow->pChild==0 && !pRow->timeWarp ){ | > > > | < < | 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 | }else{ pRow->iRail = ++p->mxRail; } if( p->mxRail>=GR_MAX_RAIL ) return; mask = BIT(pRow->iRail); if( !omitDescenders ){ pRow->bDescender = pRow->nParent>0; if( pRow->bDescender ){ railRid[pRow->iRail] = pRow->aParent[0]; } for(pLoop=pRow; pLoop; pLoop=pLoop->pNext){ pLoop->railInUse |= mask; } } assignChildrenToRail(pRow); } } } /* Assign rails to all rows that are still unassigned. */ for(pRow=p->pLast; pRow; pRow=pRow->pPrev){ int parentRid; if( pRow->iRail>=0 ){ if( pRow->pChild==0 && !pRow->timeWarp ){ if( !omitDescenders && count_nonbranch_children(pRow->rid)!=0 ){ riser_to_top(pRow); } } continue; } if( pRow->isDup ){ continue; |
︙ | ︙ | |||
517 518 519 520 521 522 523 | } } } mask = BIT(pRow->iRail); pRow->railInUse |= mask; if( pRow->pChild ){ assignChildrenToRail(pRow); | | > > > > > > > > | | > > | 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 | } } } mask = BIT(pRow->iRail); pRow->railInUse |= mask; if( pRow->pChild ){ assignChildrenToRail(pRow); }else if( !omitDescenders && count_nonbranch_children(pRow->rid)!=0 ){ riser_to_top(pRow); } if( pParent ){ for(pLoop=pParent->pPrev; pLoop && pLoop!=pRow; pLoop=pLoop->pPrev){ pLoop->railInUse |= mask; } } } /* ** Insert merge rails and merge arrows */ for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ for(i=1; i<pRow->nParent; i++){ int parentRid = pRow->aParent[i]; pDesc = hashFind(p, parentRid); if( pDesc==0 ){ /* Merge from a node that is off-screen */ int iMrail = -1; for(j=0; j<GR_MAX_RAIL; j++){ if( railRid[j]==parentRid ){ iMrail = j; break; } } if( iMrail==-1 ){ iMrail = findFreeRail(p, pRow->idx, p->nRow, 0); if( p->mxRail>=GR_MAX_RAIL ) return; railRid[iMrail] = parentRid; } mask = BIT(iMrail); pRow->mergeIn[iMrail] = 1; pRow->mergeDown |= mask; for(pLoop=pRow->pNext; pLoop; pLoop=pLoop->pNext){ pLoop->railInUse |= mask; } }else{ |
︙ | ︙ |
Changes to src/http_socket.c.
︙ | ︙ | |||
21 22 23 24 25 26 27 | ** This file implements a singleton. A single client socket may be active ** at a time. State information is stored in static variables. The identity ** of the server is held in global variables that are set by url_parse(). ** ** Low-level sockets are abstracted out into this module because they ** are handled different on Unix and windows. */ | | > > < < < | 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 | ** This file implements a singleton. A single client socket may be active ** at a time. State information is stored in static variables. The identity ** of the server is held in global variables that are set by url_parse(). ** ** Low-level sockets are abstracted out into this module because they ** are handled different on Unix and windows. */ #if defined(_WIN32) # define _WIN32_WINNT 0x501 #endif #ifndef __EXTENSIONS__ # define __EXTENSIONS__ 1 /* IPv6 won't compile on Solaris without this */ #endif #include "config.h" #include "http_socket.h" #if defined(_WIN32) # include <winsock2.h> # include <ws2tcpip.h> #else # include <netinet/in.h> # include <arpa/inet.h> # include <sys/socket.h> # include <netdb.h> |
︙ | ︙ |
Changes to src/http_ssl.c.
︙ | ︙ | |||
292 293 294 295 296 297 298 | } #endif SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); if( !pUrlData->useProxy ){ BIO_set_conn_hostname(iBio, pUrlData->name); | | | 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 | } #endif SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); if( !pUrlData->useProxy ){ BIO_set_conn_hostname(iBio, pUrlData->name); BIO_ctrl(iBio,BIO_C_SET_CONNECT,3,(char *)&pUrlData->port); if( BIO_do_connect(iBio)<=0 ){ ssl_set_errmsg("SSL: cannot connect to host %s:%d (%s)", pUrlData->name, pUrlData->port, ERR_reason_error_string(ERR_get_error())); ssl_close(); return 1; } } |
︙ | ︙ | |||
387 388 389 390 391 392 393 | /* Set the Global.zIpAddr variable to the server we are talking to. ** This is used to populate the ipaddr column of the rcvfrom table, ** if any files are received from the server. */ { /* IPv4 only code */ | | | 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 | /* Set the Global.zIpAddr variable to the server we are talking to. ** This is used to populate the ipaddr column of the rcvfrom table, ** if any files are received from the server. */ { /* IPv4 only code */ const unsigned char *ip = (const unsigned char *) BIO_ptr_ctrl(iBio,BIO_C_GET_CONNECT,2); g.zIpAddr = mprintf("%d.%d.%d.%d", ip[0], ip[1], ip[2], ip[3]); } X509_free(cert); return 0; } |
︙ | ︙ |
Changes to src/import.c.
︙ | ︙ | |||
32 33 34 35 36 37 38 39 40 41 42 43 44 45 | char *zPrior; /* Prior name if the name was changed */ char isFrom; /* True if obtained from the parent */ char isExe; /* True if executable */ char isLink; /* True if symlink */ }; #endif /* ** State information about an on-going fast-import parse. */ static struct { void (*xFinish)(void); /* Function to finish a prior record */ int nData; /* Bytes of data */ | > > > > > > > > > > | 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 | char *zPrior; /* Prior name if the name was changed */ char isFrom; /* True if obtained from the parent */ char isExe; /* True if executable */ char isLink; /* True if symlink */ }; #endif /* ** State information common to all import types. */ static struct { const char *zTrunkName; /* Name of trunk branch */ const char *zBranchPre; /* Prepended to non-trunk branch names */ const char *zBranchSuf; /* Appended to non-trunk branch names */ const char *zTagPre; /* Prepended to non-trunk tag names */ const char *zTagSuf; /* Appended to non-trunk tag names */ } gimport; /* ** State information about an on-going fast-import parse. */ static struct { void (*xFinish)(void); /* Function to finish a prior record */ int nData; /* Bytes of data */ |
︙ | ︙ | |||
64 65 66 67 68 69 70 | int hasLinks; /* True if git repository contains symlinks */ int tagCommit; /* True if the commit adds a tag */ } gg; /* ** Duplicate a string. */ | | > > | > > > > > > | 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 | int hasLinks; /* True if git repository contains symlinks */ int tagCommit; /* True if the commit adds a tag */ } gg; /* ** Duplicate a string. */ char *fossil_strndup(const char *zOrig, int len){ char *z = 0; if( zOrig ){ int n; if( len<0 ){ n = strlen(zOrig); }else{ for( n=0; zOrig[n] && n<len; ++n ); } z = fossil_malloc( n+1 ); memcpy(z, zOrig, n+1); } return z; } char *fossil_strdup(const char *zOrig){ return fossil_strndup(zOrig, -1); } /* ** A no-op "xFinish" method */ static void finish_noop(void){} |
︙ | ︙ | |||
196 197 198 199 200 201 202 | ** control artifact to the BLOB table. */ static void finish_tag(void){ Blob record, cksum; if( gg.zDate && gg.zTag && gg.zFrom && gg.zUser ){ blob_zero(&record); blob_appendf(&record, "D %s\n", gg.zDate); | | > | 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 | ** control artifact to the BLOB table. */ static void finish_tag(void){ Blob record, cksum; if( gg.zDate && gg.zTag && gg.zFrom && gg.zUser ){ blob_zero(&record); blob_appendf(&record, "D %s\n", gg.zDate); blob_appendf(&record, "T +%F%F%F %s\n", gimport.zTagPre, gg.zTag, gimport.zTagSuf, gg.zFrom); blob_appendf(&record, "U %F\n", gg.zUser); md5sum_blob(&record, &cksum); blob_appendf(&record, "Z %b\n", &cksum); fast_insert_content(&record, 0, 0, 1); blob_reset(&cksum); } import_reset(0); |
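This is where the --rename-tag pattern takes effect: the option parser further down in import_cmd() splits the pattern at its single '%' into gimport.zTagPre and gimport.zTagSuf, and finish_tag() wraps those around the incoming tag name. So with --rename-tag svn-%-tag, the example given in the command help, a tag named "release" in the import stream is recorded as svn-release-tag.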
︙ | ︙ | |||
273 274 275 276 277 278 279 | zFromBranch = 0; } /* Add the required "T" cards to the manifest. Make sure they are added ** in sorted order and without any duplicates. Otherwise, fossil will not ** recognize the document as a valid manifest. */ if( !gg.tagCommit && fossil_strcmp(zFromBranch, gg.zBranch)!=0 ){ | | > | > | > | | 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 | zFromBranch = 0; } /* Add the required "T" cards to the manifest. Make sure they are added ** in sorted order and without any duplicates. Otherwise, fossil will not ** recognize the document as a valid manifest. */ if( !gg.tagCommit && fossil_strcmp(zFromBranch, gg.zBranch)!=0 ){ aTCard[nTCard++] = mprintf("T *branch * %F%F%F\n", gimport.zBranchPre, gg.zBranch, gimport.zBranchSuf); aTCard[nTCard++] = mprintf("T *sym-%F%F%F *\n", gimport.zBranchPre, gg.zBranch, gimport.zBranchSuf); if( zFromBranch ){ aTCard[nTCard++] = mprintf("T -sym-%F%F%F *\n", gimport.zBranchPre, zFromBranch, gimport.zBranchSuf); } } if( gg.zFrom==0 ){ aTCard[nTCard++] = mprintf("T *sym-%F *\n", gimport.zTrunkName); } qsort(aTCard, nTCard, sizeof(char *), string_cmp); for(i=0; i<nTCard; i++){ if( i==0 || fossil_strcmp(aTCard[i-1], aTCard[i]) ){ blob_appendf(&record, "%s", aTCard[i]); } } |
︙ | ︙ | |||
311 312 313 314 315 316 317 | ** but overwrite that entry if a later instance of the same tag appears. ** ** This behavior seems like a bug in git-fast-export, but it is easier ** to work around the problem than to fix git-fast-export. */ if( gg.tagCommit && gg.zDate && gg.zUser && gg.zFrom ){ blob_appendf(&record, "D %s\n", gg.zDate); | | > | 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 | ** but overwrite that entry if a later instance of the same tag appears. ** ** This behavior seems like a bug in git-fast-export, but it is easier ** to work around the problem than to fix git-fast-export. */ if( gg.tagCommit && gg.zDate && gg.zUser && gg.zFrom ){ blob_appendf(&record, "D %s\n", gg.zDate); blob_appendf(&record, "T +sym-%F%F%F %s\n", gimport.zBranchPre, gg.zBranch, gimport.zBranchSuf, gg.zPrevCheckin); blob_appendf(&record, "U %F\n", gg.zUser); md5sum_blob(&record, &cksum); blob_appendf(&record, "Z %b\n", &cksum); db_multi_exec( "INSERT OR REPLACE INTO xtag(tname, tcontent)" " VALUES(%Q,%Q)", gg.zBranch, blob_str(&record) ); |
︙ | ︙ | |||
747 748 749 750 751 752 753 | const char *zTrunk; /* Name of trunk folder in repo root */ int lenTrunk; /* String length of zTrunk */ const char *zBranches; /* Name of branches folder in repo root */ int lenBranches; /* String length of zBranches */ const char *zTags; /* Name of tags folder in repo root */ int lenTags; /* String length of zTags */ Bag newBranches; /* Branches that were created in this revision */ | | > > > | 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 | const char *zTrunk; /* Name of trunk folder in repo root */ int lenTrunk; /* String length of zTrunk */ const char *zBranches; /* Name of branches folder in repo root */ int lenBranches; /* String length of zBranches */ const char *zTags; /* Name of tags folder in repo root */ int lenTags; /* String length of zTags */ Bag newBranches; /* Branches that were created in this revision */ int revFlag; /* Add svn-rev-nn tags on every checkin */ const char *zRevPre; /* Prepended to revision tag names */ const char *zRevSuf; /* Appended to revision tag names */ const char **azIgnTree; /* NULL-terminated list of dirs to ignore */ } gsvn; typedef struct { char *zKey; char *zVal; } KeyVal; typedef struct { KeyVal *aHeaders; |
︙ | ︙ | |||
923 924 925 926 927 928 929 | } /* ** Returns the UUID for the RID, or NULL if not found. ** The returned string is allocated via db_text() and must be ** free()d by the caller. */ | | < | 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 | } /* ** Returns the UUID for the RID, or NULL if not found. ** The returned string is allocated via db_text() and must be ** free()d by the caller. */ char *rid_to_uuid(int rid){ return db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); } #define SVN_UNKNOWN 0 #define SVN_TRUNK 1 #define SVN_BRANCH 2 #define SVN_TAG 3 |
︙ | ︙ | |||
967 968 969 970 971 972 973 | if( !bag_find(&gsvn.newBranches, branchId) ){ parentRid = db_int(0, "SELECT trid, max(trev) FROM xrevisions" " WHERE trev<%d AND tbranch=%d", gsvn.rev, branchId); } if( parentRid>0 ){ pParentManifest = manifest_get(parentRid, CFTYPE_MANIFEST, 0); | > | | | | | > | 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 | if( !bag_find(&gsvn.newBranches, branchId) ){ parentRid = db_int(0, "SELECT trid, max(trev) FROM xrevisions" " WHERE trev<%d AND tbranch=%d", gsvn.rev, branchId); } if( parentRid>0 ){ pParentManifest = manifest_get(parentRid, CFTYPE_MANIFEST, 0); if( pParentManifest ){ pParentFile = manifest_file_next(pParentManifest, 0); parentBranch = db_int(0, "SELECT tbranch FROM xrevisions WHERE trid=%d", parentRid); if( parentBranch!=branchId && branchType!=SVN_TAG ){ sameAsParent = 0; } } } if( mergeRid<MAX_INT_32 ){ if( gsvn.zComment ){ blob_appendf(&manifest, "C %F\n", gsvn.zComment); }else{ blob_append(&manifest, "C (no\\scomment)\n", 16); |
︙ | ︙ | |||
1016 1017 1018 1019 1020 1021 1022 | char *zParentUuid = rid_to_uuid(parentRid); if( parentRid==mergeRid || mergeRid==0){ char *zParentBranch = db_text(0, "SELECT tname FROM xbranches WHERE tid=%d", parentBranch ); blob_appendf(&manifest, "P %s\n", zParentUuid); | | > | > | | > | > | | > | > | > | | > | > | > | 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 | char *zParentUuid = rid_to_uuid(parentRid); if( parentRid==mergeRid || mergeRid==0){ char *zParentBranch = db_text(0, "SELECT tname FROM xbranches WHERE tid=%d", parentBranch ); blob_appendf(&manifest, "P %s\n", zParentUuid); blob_appendf(&manifest, "T *branch * %F%F%F\n", gimport.zBranchPre, zBranch, gimport.zBranchSuf); blob_appendf(&manifest, "T *sym-%F%F%F *\n", gimport.zBranchPre, zBranch, gimport.zBranchSuf); if( gsvn.revFlag ){ blob_appendf(&manifest, "T +sym-%Fr%d%F *\n", gimport.zTagPre, gsvn.rev, gimport.zTagSuf); } blob_appendf(&manifest, "T -sym-%F%F%F *\n", gimport.zBranchPre, zParentBranch, gimport.zBranchSuf); fossil_free(zParentBranch); }else{ char *zMergeUuid = rid_to_uuid(mergeRid); blob_appendf(&manifest, "P %s %s\n", zParentUuid, zMergeUuid); if( gsvn.revFlag ){ blob_appendf(&manifest, "T +sym-%F%d%F *\n", gsvn.zRevPre, gsvn.rev, gsvn.zRevSuf); } fossil_free(zMergeUuid); } fossil_free(zParentUuid); }else{ blob_appendf(&manifest, "T *branch * %F%F%F\n", gimport.zBranchPre, zBranch, gimport.zBranchSuf); blob_appendf(&manifest, "T *sym-%F%F%F *\n", gimport.zBranchPre, zBranch, gimport.zBranchSuf); if( gsvn.revFlag ){ blob_appendf(&manifest, "T +sym-%F%d%F *\n", gsvn.zRevPre, gsvn.rev, gsvn.zRevSuf); } } }else if( branchType==SVN_TAG ){ char *zParentUuid = rid_to_uuid(parentRid); blob_reset(&manifest); blob_appendf(&manifest, "D %s\n", gsvn.zDate); blob_appendf(&manifest, "T +sym-%F%F%F %s\n", gimport.zTagPre, zBranch, gimport.zTagSuf, zParentUuid); fossil_free(zParentUuid); } }else{ char *zParentUuid = rid_to_uuid(parentRid); blob_appendf(&manifest, "D %s\n", gsvn.zDate); if( branchType!=SVN_TAG ){ blob_appendf(&manifest, "T +closed %s\n", zParentUuid); }else{ blob_appendf(&manifest, "T -sym-%F%F%F %s\n", gimport.zBranchPre, zBranch, gimport.zBranchSuf, zParentUuid); } fossil_free(zParentUuid); } if( gsvn.zUser ){ blob_appendf(&manifest, "U %F\n", gsvn.zUser); }else{ const char *zUserOvrd = find_option("user-override",0,1); |
︙ | ︙ | |||
1151 1152 1153 1154 1155 1156 1157 | } zDiff += lenData; } } /* ** Extract the branch or tag that the given path is on. Return the branch ID. | > | > > > > > > > > > > > > | 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 | } zDiff += lenData; } } /* ** Extract the branch or tag that the given path is on. Return the branch ID. ** Return 0 if not a branch, tag, or trunk, or if ignored by --ignore-tree. */ static int svn_parse_path(char *zPath, char **zFile, int *type){ char *zBranch = 0; int branchId = 0; if( gsvn.azIgnTree ){ const char **pzIgnTree; unsigned nPath = strlen(zPath); for( pzIgnTree = gsvn.azIgnTree; *pzIgnTree; ++pzIgnTree ){ const char *zIgn = *pzIgnTree; int nIgn = strlen(zIgn); if( strncmp(zPath, zIgn, nIgn) == 0 && ( nPath == nIgn || (nPath > nIgn && zPath[nIgn] == '/')) ){ return 0; } } } *type = SVN_UNKNOWN; *zFile = 0; if( gsvn.lenTrunk==0 ){ zBranch = "trunk"; *zFile = zPath; *type = SVN_TRUNK; }else |
︙ | ︙ | |||
1432 1433 1434 1435 1436 1437 1438 | } }else if( strncmp(zAction, "change", 6)==0 ){ int rid = 0; if( zKind==0 ){ fossil_fatal("Missing Node-kind"); } | | | 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493 1494 1495 1496 | } }else if( strncmp(zAction, "change", 6)==0 ){ int rid = 0; if( zKind==0 ){ fossil_fatal("Missing Node-kind"); } if( rec.contentFlag && strncmp(zKind, "dir", 3)!=0 ){ if( deltaFlag ){ Blob deltaSrc; Blob target; rid = db_int(0, "SELECT rid FROM blob WHERE uuid=(" " SELECT tuuid FROM xfiles" " WHERE tpath=%Q AND tbranch=%d" ")", zFile, branchId); |
︙ | ︙ | |||
1496 1497 1498 1499 1500 1501 1502 | ** The following formats are currently understood by this command ** ** --git Import from the git-fast-export file format (default) ** Options: ** --import-marks FILE Restore marks table from FILE ** --export-marks FILE Save marks table to FILE ** | | | | > > > > > | | | | | > > > | > > > > > > > > > | | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | > > > > > > > > > > > | | 1546 1547 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 | ** The following formats are currently understood by this command ** ** --git Import from the git-fast-export file format (default) ** Options: ** --import-marks FILE Restore marks table from FILE ** --export-marks FILE Save marks table to FILE ** ** --svn Import from the svnadmin-dump file format. The default ** behaviour (unless overridden by --flat) is to treat 3 ** folders in the SVN root as special, following the ** common layout of SVN repositories. These are (by ** default) trunk/, branches/ and tags/. The SVN --deltas ** format is supported but not required. ** Options: ** --trunk FOLDER Name of trunk folder ** --branches FOLDER Name of branches folder ** --tags FOLDER Name of tags folder ** --base PATH Path to project root in repository ** --flat The whole dump is a single branch ** --rev-tags Tag each revision, implied by -i ** --no-rev-tags Disables tagging effect of -i ** --rename-rev PAT Rev tag names, default "svn-rev-%" ** --ignore-tree DIR Ignores subtree rooted at DIR ** ** Common Options: ** -i|--incremental allow importing into an existing repository ** -f|--force overwrite repository if already exists ** -q|--quiet omit progress output ** --no-rebuild skip the "rebuilding metadata" step ** --no-vacuum skip the final VACUUM of the database file ** --rename-trunk NAME use NAME as name of imported trunk branch ** --rename-branch PAT rename all branch names using PAT pattern ** --rename-tag PAT rename all tag names using PAT pattern ** ** The --incremental option allows an existing repository to be extended ** with new content. The --rename-* options may be useful to avoid name ** conflicts when using the --incremental option. ** ** The argument to --rename-* contains one "%" character to be replaced ** with the original name. For example, "--rename-tag svn-%-tag" renames ** the tag called "release" to "svn-release-tag". ** ** --ignore-tree is useful for importing Subversion repositories which ** move branches to subdirectories of "branches/deleted" instead of ** deleting them. It can be supplied multiple times if necessary. 
** ** See also: export */ void import_cmd(void){ char *zPassword; FILE *pIn; Stmt q; int forceFlag = find_option("force", "f", 0)!=0; int svnFlag = find_option("svn", 0, 0)!=0; int gitFlag = find_option("git", 0, 0)!=0; int omitRebuild = find_option("no-rebuild",0,0)!=0; int omitVacuum = find_option("no-vacuum",0,0)!=0; /* Options common to all input formats */ int incrFlag = find_option("incremental", "i", 0)!=0; /* Options for --svn only */ const char *zBase = ""; int flatFlag = 0; /* Options for --git only */ const char *markfile_in = 0; const char *markfile_out = 0; /* Interpret --rename-* options. Use a table to avoid code duplication. */ const struct { const char *zOpt, **varPre, *zDefaultPre, **varSuf, *zDefaultSuf; int format; /* 1=git, 2=svn, 3=any */ } renOpts[] = { {"rename-branch", &gimport.zBranchPre, "", &gimport.zBranchSuf, "", 3}, {"rename-tag" , &gimport.zTagPre , "", &gimport.zTagSuf , "", 3}, {"rename-rev" , &gsvn.zRevPre, "svn-rev-", &gsvn.zRevSuf , "", 2}, }, *renOpt = renOpts; int i; for( i = 0; i < count(renOpts); ++i, ++renOpt ){ if( 1 << svnFlag & renOpt->format ){ const char *zArgument = find_option(renOpt->zOpt, 0, 1); if( zArgument ){ const char *sep = strchr(zArgument, '%'); if( !sep ){ fossil_fatal("missing '%%' in argument to --%s", renOpt->zOpt); }else if( strchr(sep + 1, '%') ){ fossil_fatal("multiple '%%' in argument to --%s", renOpt->zOpt); } *renOpt->varPre = fossil_malloc(sep - zArgument + 1); memcpy((char *)*renOpt->varPre, zArgument, sep - zArgument); ((char *)*renOpt->varPre)[sep - zArgument] = 0; *renOpt->varSuf = sep + 1; }else{ *renOpt->varPre = renOpt->zDefaultPre; *renOpt->varSuf = renOpt->zDefaultSuf; } } } if( !(gimport.zTrunkName = find_option("rename-trunk", 0, 1)) ){ gimport.zTrunkName = "trunk"; } if( svnFlag ){ /* Get --svn related options here, so verify_all_options() fails when * svn-only options are specified with --git */ const char *zIgnTree; unsigned nIgnTree = 0; while( (zIgnTree = find_option("ignore-tree", 0, 1)) ){ if ( *zIgnTree ){ gsvn.azIgnTree = fossil_realloc(gsvn.azIgnTree, sizeof(*gsvn.azIgnTree) * (nIgnTree + 2)); gsvn.azIgnTree[nIgnTree++] = zIgnTree; gsvn.azIgnTree[nIgnTree] = 0; } } zBase = find_option("base", 0, 1); flatFlag = find_option("flat", 0, 0)!=0; gsvn.zTrunk = find_option("trunk", 0, 1); gsvn.zBranches = find_option("branches", 0, 1); gsvn.zTags = find_option("tags", 0, 1); gsvn.revFlag = find_option("rev-tags", 0, 0) || (incrFlag && !find_option("no-rev-tags", 0, 0)); }else if( gitFlag ){ markfile_in = find_option("import-marks", 0, 1); markfile_out = find_option("export-marks", 0, 1); } verify_all_options(); if( g.argc!=3 && g.argc!=4 ){ |
︙ | ︙ | |||
1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 | ** contains the text of an artifact that will add a tag to a check-in. ** The git-fast-export file format might specify the same tag multiple ** times but only the last tag should be used. And we do not know which ** occurrence of the tag is the last until the import finishes. */ db_multi_exec( "CREATE TEMP TABLE xmark(tname TEXT UNIQUE, trid INT, tuuid TEXT);" "CREATE TEMP TABLE xbranch(tname TEXT UNIQUE, brnm TEXT);" "CREATE TEMP TABLE xtag(tname TEXT UNIQUE, tcontent TEXT);" ); | > | | | | | | | | | | < > > | | | 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 1790 1791 1792 1793 1794 1795 1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 1808 1809 1810 1811 1812 1813 1814 1815 1816 1817 1818 1819 1820 1821 1822 1823 1824 1825 1826 | ** contains the text of an artifact that will add a tag to a check-in. ** The git-fast-export file format might specify the same tag multiple ** times but only the last tag should be used. And we do not know which ** occurrence of the tag is the last until the import finishes. */ db_multi_exec( "CREATE TEMP TABLE xmark(tname TEXT UNIQUE, trid INT, tuuid TEXT);" "CREATE INDEX temp.i_xmark ON xmark(trid);" "CREATE TEMP TABLE xbranch(tname TEXT UNIQUE, brnm TEXT);" "CREATE TEMP TABLE xtag(tname TEXT UNIQUE, tcontent TEXT);" ); if( markfile_in ){ FILE *f = fossil_fopen(markfile_in, "r"); if( !f ){ fossil_fatal("cannot open %s for reading", markfile_in); } if( import_marks(f, &blobs, NULL, NULL)<0 ){ fossil_fatal("error importing marks from file: %s", markfile_in); } fclose(f); } manifest_crosslink_begin(); git_fast_import(pIn); db_prepare(&q, "SELECT tcontent FROM xtag"); while( db_step(&q)==SQLITE_ROW ){ Blob record; db_ephemeral_blob(&q, 0, &record); fast_insert_content(&record, 0, 0, 1); import_reset(0); } db_finalize(&q); if( markfile_out ){ int rid; Stmt q_marks; FILE *f; db_prepare(&q_marks, "SELECT DISTINCT trid FROM xmark"); while( db_step(&q_marks)==SQLITE_ROW ){ rid = db_column_int(&q_marks, 0); if( db_int(0, "SELECT count(objid) FROM event" " WHERE objid=%d AND type='ci'", rid)==0 ){ /* Blob marks exported by git aren't saved between runs, so they need ** to be left free for git to re-use in the future. */ }else{ bag_insert(&vers, rid); } } db_finalize(&q_marks); f = fossil_fopen(markfile_out, "w"); if( !f ){ fossil_fatal("cannot open %s for writing", markfile_out); } export_marks(f, &blobs, &vers); fclose(f); bag_clear(&blobs); bag_clear(&vers); } manifest_crosslink_end(MC_NONE); |
︙ | ︙ |
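The option handling above splits each --rename-* argument at its single "%" character into a prefix and a suffix that later bracket the original branch or tag name. As a rough illustrative sketch (not part of this check-in), a template such as "svn-%-rev" would be processed along these lines:

char *zArgument = fossil_strdup("svn-%-rev");   /* hypothetical input */
char *zSep = strchr(zArgument, '%');
int nPre = (int)(zSep - zArgument);
char *zPre = fossil_malloc(nPre+1);
memcpy(zPre, zArgument, nPre);
zPre[nPre] = 0;                 /* prefix: "svn-" */
const char *zSuf = zSep + 1;    /* suffix: "-rev" */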
Changes to src/info.c.
︙ | ︙ | |||
152 153 154 155 156 157 158 159 160 161 162 163 164 165 | while( db_step(&s)==SQLITE_ROW ){ fossil_print("access-url: %-54s %s\n", db_column_text(&s, 0), db_column_text(&s, 1)); } db_finalize(&s); } /* ** COMMAND: info ** ** Usage: %fossil info ?VERSION | REPOSITORY_FILENAME? ?OPTIONS? ** ** With no arguments, provide information about the current tree. | > > > > > > > > > > > | 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 | while( db_step(&s)==SQLITE_ROW ){ fossil_print("access-url: %-54s %s\n", db_column_text(&s, 0), db_column_text(&s, 1)); } db_finalize(&s); } /* ** Show the parent project, if any */ static void showParentProject(void){ const char *zParentCode; zParentCode = db_get("parent-project-code",0); if( zParentCode ){ fossil_print("derived-from: %s %s\n", zParentCode, db_get("parent-project-name","")); } } /* ** COMMAND: info ** ** Usage: %fossil info ?VERSION | REPOSITORY_FILENAME? ?OPTIONS? ** ** With no arguments, provide information about the current tree. |
︙ | ︙ | |||
187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 | if( g.argc==3 && (fsize = file_size(g.argv[2]))>0 && (fsize&0x1ff)==0 ){ db_open_config(0, 0); db_open_repository(g.argv[2]); db_record_repository_filename(g.argv[2]); fossil_print("project-name: %s\n", db_get("project-name", "<unnamed>")); fossil_print("project-code: %s\n", db_get("project-code", "<none>")); extraRepoInfo(); return; } db_find_and_open_repository(0,0); verify_all_options(); if( g.argc==2 ){ int vid; /* 012345678901234 */ db_record_repository_filename(0); fossil_print("project-name: %s\n", db_get("project-name", "<unnamed>")); if( g.localOpen ){ fossil_print("repository: %s\n", db_repository_filename()); fossil_print("local-root: %s\n", g.zLocalRoot); } if( verboseFlag ) extraRepoInfo(); if( g.zConfigDbName ){ fossil_print("config-db: %s\n", g.zConfigDbName); } fossil_print("project-code: %s\n", db_get("project-code", "")); vid = g.localOpen ? db_lget_int("checkout", 0) : 0; if( vid ){ show_common_info(vid, "checkout:", 1, 1); } fossil_print("check-ins: %d\n", db_int(-1, "SELECT count(*) FROM event WHERE type='ci' /*scan*/")); }else{ int rid; rid = name_to_rid(g.argv[2]); if( rid==0 ){ | > > | | 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 | if( g.argc==3 && (fsize = file_size(g.argv[2]))>0 && (fsize&0x1ff)==0 ){ db_open_config(0, 0); db_open_repository(g.argv[2]); db_record_repository_filename(g.argv[2]); fossil_print("project-name: %s\n", db_get("project-name", "<unnamed>")); fossil_print("project-code: %s\n", db_get("project-code", "<none>")); showParentProject(); extraRepoInfo(); return; } db_find_and_open_repository(0,0); verify_all_options(); if( g.argc==2 ){ int vid; /* 012345678901234 */ db_record_repository_filename(0); fossil_print("project-name: %s\n", db_get("project-name", "<unnamed>")); if( g.localOpen ){ fossil_print("repository: %s\n", db_repository_filename()); fossil_print("local-root: %s\n", g.zLocalRoot); } if( verboseFlag ) extraRepoInfo(); if( g.zConfigDbName ){ fossil_print("config-db: %s\n", g.zConfigDbName); } fossil_print("project-code: %s\n", db_get("project-code", "")); showParentProject(); vid = g.localOpen ? db_lget_int("checkout", 0) : 0; if( vid ){ show_common_info(vid, "checkout:", 1, 1); } fossil_print("check-ins: %d\n", db_int(-1, "SELECT count(*) FROM event WHERE type='ci' /*scan*/")); }else{ int rid; rid = name_to_rid(g.argv[2]); if( rid==0 ){ fossil_fatal("no such object: %s", g.argv[2]); } show_common_info(rid, "uuid:", 1, 1); } } /* ** Show information about all tags on a given check-in. |
︙ | ︙ | |||
811 812 813 814 815 816 817 | if( strcmp(zModAction,"approve")==0 ){ moderation_approve(rid); } } style_header("Update of \"%h\"", pWiki->zWikiTitle); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); zDate = db_text(0, "SELECT datetime(%.17g)", pWiki->rDate); | | | < | < | 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 | if( strcmp(zModAction,"approve")==0 ){ moderation_approve(rid); } } style_header("Update of \"%h\"", pWiki->zWikiTitle); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); zDate = db_text(0, "SELECT datetime(%.17g)", pWiki->rDate); style_submenu_element("Raw", "artifact/%s", zUuid); style_submenu_element("History", "whistory?name=%t", pWiki->zWikiTitle); style_submenu_element("Page", "wiki?name=%t", pWiki->zWikiTitle); login_anonymous_available(); @ <div class="section">Overview</div> @ <p><table class="label-value"> @ <tr><th>Artifact ID:</th> @ <td>%z(href("%R/artifact/%!S",zUuid))%s(zUuid)</a> if( g.perm.Setup ){ @ (%d(rid)) |
︙ | ︙ | |||
1038 1039 1040 1041 1042 1043 1044 | zFrom = P("from"); zTo = P("to"); if(zGlob && !*zGlob){ zGlob = NULL; } diffFlags = construct_diff_flags(verboseFlag, sideBySide); zW = (diffFlags&DIFF_IGNORE_ALLWS)?"&w":""; | | < | < | | | | | < | | | 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 | zFrom = P("from"); zTo = P("to"); if(zGlob && !*zGlob){ zGlob = NULL; } diffFlags = construct_diff_flags(verboseFlag, sideBySide); zW = (diffFlags&DIFF_IGNORE_ALLWS)?"&w":""; style_submenu_element("Path", "%R/timeline?me=%T&you=%T", zFrom, zTo); if( sideBySide || verboseFlag ){ style_submenu_element("Hide Diff", "%R/vdiff?from=%T&to=%T&sbs=0%s%T%s", zFrom, zTo, zGlob ? "&glob=" : "", zGlob ? zGlob : "", zW); } if( !sideBySide ){ style_submenu_element("Side-by-Side Diff", "%R/vdiff?from=%T&to=%T&sbs=1%s%T%s", zFrom, zTo, zGlob ? "&glob=" : "", zGlob ? zGlob : "", zW); } if( sideBySide || !verboseFlag ) { style_submenu_element("Unified Diff", "%R/vdiff?from=%T&to=%T&sbs=0&v%s%T%s", zFrom, zTo, zGlob ? "&glob=" : "", zGlob ? zGlob : "", zW); } style_submenu_element("Invert", "%R/vdiff?from=%T&to=%T&sbs=%d%s%s%T%s", zTo, zFrom, sideBySide, (verboseFlag && !sideBySide)?"&v":"", zGlob ? "&glob=" : "", zGlob ? zGlob : "", zW); if( zGlob ){ style_submenu_element("Clear glob", "%R/vdiff?from=%T&to=%T&sbs=%d%s%s", zFrom, zTo, sideBySide, (verboseFlag && !sideBySide)?"&v":"", zW); }else{ style_submenu_element("Patch", "%R/vpatch?from=%T&to=%T%s", zFrom, zTo, zW); } if( sideBySide || verboseFlag ){ if( *zW ){ style_submenu_element("Show Whitespace Differences", "%R/vdiff?from=%T&to=%T&sbs=%d%s%s%T", zFrom, zTo, sideBySide, (verboseFlag && !sideBySide)?"&v":"", zGlob ? "&glob=" : "", zGlob ? zGlob : ""); }else{ style_submenu_element("Ignore Whitespace", "%R/vdiff?from=%T&to=%T&sbs=%d%s%s%T&w", zFrom, zTo, sideBySide, (verboseFlag && !sideBySide)?"&v":"", zGlob ? "&glob=" : "", zGlob ? zGlob : ""); } } style_header("Check-in Differences"); if( P("nohdr")==0 ){ |
︙ | ︙ | |||
1347 1348 1349 1350 1351 1352 1353 | objType |= OBJTYPE_CHECKIN; }else if( zType[0]=='e' ){ if( eventTagId != 0) { @ Instance of technote objType |= OBJTYPE_EVENT; hyperlink_to_event_tagid(db_column_int(&q, 5)); }else{ | | | 1355 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 | objType |= OBJTYPE_CHECKIN; }else if( zType[0]=='e' ){ if( eventTagId != 0) { @ Instance of technote objType |= OBJTYPE_EVENT; hyperlink_to_event_tagid(db_column_int(&q, 5)); }else{ @ Attachment to technote } }else{ @ Tag referencing } if( zType[0]!='e' || eventTagId == 0){ hyperlink_to_uuid(zUuid); } |
︙ | ︙ | |||
1499 1500 1501 1502 1503 1504 1505 | zV1 = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", v1); zV2 = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", v2); diffFlags = construct_diff_flags(1, sideBySide) | DIFF_HTML; style_header("Diff"); zW = (diffFlags&DIFF_IGNORE_ALLWS)?"&w":""; if( *zW ){ | | | | | | | 1507 1508 1509 1510 1511 1512 1513 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 1529 1530 1531 1532 1533 1534 1535 1536 | zV1 = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", v1); zV2 = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", v2); diffFlags = construct_diff_flags(1, sideBySide) | DIFF_HTML; style_header("Diff"); zW = (diffFlags&DIFF_IGNORE_ALLWS)?"&w":""; if( *zW ){ style_submenu_element("Show Whitespace Changes", "%s/fdiff?v1=%T&v2=%T&sbs=%d", g.zTop, P("v1"), P("v2"), sideBySide); }else{ style_submenu_element("Ignore Whitespace", "%s/fdiff?v1=%T&v2=%T&sbs=%d&w", g.zTop, P("v1"), P("v2"), sideBySide); } style_submenu_element("Patch", "%s/fdiff?v1=%T&v2=%T&patch", g.zTop, P("v1"), P("v2")); if( !sideBySide ){ style_submenu_element("Side-by-Side Diff", "%s/fdiff?v1=%T&v2=%T&sbs=1%s", g.zTop, P("v1"), P("v2"), zW); }else{ style_submenu_element("Unified Diff", "%s/fdiff?v1=%T&v2=%T&sbs=0%s", g.zTop, P("v1"), P("v2"), zW); } if( P("smhdr")!=0 ){ @ <h2>Differences From Artifact @ %z(href("%R/artifact/%!S",zV1))[%S(zV1)]</a> To |
︙ | ︙ | |||
1657 1658 1659 1660 1661 1662 1663 | rid = name_to_rid_www("name"); login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } if( rid==0 ) fossil_redirect_home(); if( g.perm.Admin ){ const char *zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); if( db_exists("SELECT 1 FROM shun WHERE uuid=%Q", zUuid) ){ | | | < | | | 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 1696 | rid = name_to_rid_www("name"); login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } if( rid==0 ) fossil_redirect_home(); if( g.perm.Admin ){ const char *zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); if( db_exists("SELECT 1 FROM shun WHERE uuid=%Q", zUuid) ){ style_submenu_element("Unshun", "%s/shun?accept=%s&sub=1#delshun", g.zTop, zUuid); }else{ style_submenu_element("Shun", "%s/shun?shun=%s#addshun", g.zTop, zUuid); } } style_header("Hex Artifact Content"); zUuid = db_text("?","SELECT uuid FROM blob WHERE rid=%d", rid); if( g.perm.Setup ){ @ <h2>Artifact %s(zUuid) (%d(rid)):</h2> }else{ @ <h2>Artifact %s(zUuid):</h2> } blob_zero(&downloadName); if( P("verbose")!=0 ) objdescFlags |= OBJDESC_DETAIL; object_description(rid, objdescFlags, &downloadName); style_submenu_element("Download", "%s/raw/%T?name=%s", g.zTop, blob_str(&downloadName), zUuid); @ <hr /> content_get(rid, &content); @ <blockquote><pre> hexdump(&content); @ </pre></blockquote> style_footer(); } |
︙ | ︙ | |||
1880 1881 1882 1883 1884 1885 1886 | cgi_redirectf("%R/raw/%T?name=%s", blob_str(&downloadName), db_text("?", "SELECT uuid FROM blob WHERE rid=%d", rid)); /*NOTREACHED*/ } if( g.perm.Admin ){ const char *zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); if( db_exists("SELECT 1 FROM shun WHERE uuid=%Q", zUuid) ){ | | | < | 1887 1888 1889 1890 1891 1892 1893 1894 1895 1896 1897 1898 1899 1900 1901 1902 1903 1904 | cgi_redirectf("%R/raw/%T?name=%s", blob_str(&downloadName), db_text("?", "SELECT uuid FROM blob WHERE rid=%d", rid)); /*NOTREACHED*/ } if( g.perm.Admin ){ const char *zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); if( db_exists("SELECT 1 FROM shun WHERE uuid=%Q", zUuid) ){ style_submenu_element("Unshun", "%s/shun?accept=%s&sub=1#accshun", g.zTop, zUuid); }else{ style_submenu_element("Shun", "%s/shun?shun=%s#addshun", g.zTop, zUuid); } } style_header("%s", descOnly ? "Artifact Description" : "Artifact Content"); zUuid = db_text("?", "SELECT uuid FROM blob WHERE rid=%d", rid); if( g.perm.Setup ){ @ <h2>Artifact %s(zUuid) (%d(rid)):</h2> }else{ |
︙ | ︙ | |||
1910 1911 1912 1913 1914 1915 1916 | const char *zUser = db_column_text(&q,0); const char *zDate = db_column_text(&q,1); const char *zIp = db_column_text(&q,2); @ <p>Received on %s(zDate) from %h(zUser) at %h(zIp).</p> } db_finalize(&q); } | | | | < | < | < | < | < | | | < | | | < | 1916 1917 1918 1919 1920 1921 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 | const char *zUser = db_column_text(&q,0); const char *zDate = db_column_text(&q,1); const char *zIp = db_column_text(&q,2); @ <p>Received on %s(zDate) from %h(zUser) at %h(zIp).</p> } db_finalize(&q); } style_submenu_element("Download", "%R/raw/%T?name=%s", blob_str(&downloadName), zUuid); if( db_exists("SELECT 1 FROM mlink WHERE fid=%d", rid) ){ style_submenu_element("Check-ins Using", "%R/timeline?n=200&uf=%s", zUuid); } asText = P("txt")!=0; zMime = mimetype_from_name(blob_str(&downloadName)); if( zMime ){ if( fossil_strcmp(zMime, "text/html")==0 ){ if( asText ){ style_submenu_element("Html", "%s/artifact/%s", g.zTop, zUuid); }else{ renderAsHtml = 1; style_submenu_element("Text", "%s/artifact/%s?txt=1", g.zTop, zUuid); } }else if( fossil_strcmp(zMime, "text/x-fossil-wiki")==0 || fossil_strcmp(zMime, "text/x-markdown")==0 ){ if( asText ){ style_submenu_element("Wiki", "%s/artifact/%s", g.zTop, zUuid); }else{ renderAsWiki = 1; style_submenu_element("Text", "%s/artifact/%s?txt=1", g.zTop, zUuid); } } } if( (objType & (OBJTYPE_WIKI|OBJTYPE_TICKET))!=0 ){ style_submenu_element("Parsed", "%R/info/%s", zUuid); } if( descOnly ){ style_submenu_element("Content", "%R/artifact/%s", zUuid); }else{ style_submenu_element("Line Numbers", "%R/artifact/%s%s", zUuid, ((zLn&&*zLn) ? "" : "?txt=1&ln=0")); @ <hr /> content_get(rid, &content); if( renderAsWiki ){ wiki_render_by_mimetype(&content, zMime); }else if( renderAsHtml ){ @ <iframe src="%R/raw/%T(blob_str(&downloadName))?name=%s(zUuid)" @ width="100%%" frameborder="0" marginwidth="0" marginheight="0" @ sandbox="allow-same-origin" @ onload="this.height=this.contentDocument.documentElement.scrollHeight;"> @ </iframe> }else{ style_submenu_element("Hex", "%s/hexdump?name=%s", g.zTop, zUuid); blob_to_utf8_no_bom(&content, 0); zMime = mimetype_from_content(&content); @ <blockquote> if( zMime==0 ){ const char *z; z = blob_str(&content); if( zLn ){ output_text_with_line_numbers(z, zLn); }else{ @ <pre> @ %h(z) @ </pre> } }else if( strncmp(zMime, "image/", 6)==0 ){ @ <i>(file is %d(blob_size(&content)) bytes of image data)</i><br /> @ <img src="%R/raw/%s(zUuid)?m=%s(zMime)" /> style_submenu_element("Image", "%R/raw/%s?m=%s", zUuid, zMime); }else{ @ <i>(file is %d(blob_size(&content)) bytes of binary data)</i> } @ </blockquote> } } style_footer(); |
︙ | ︙ | |||
2010 2011 2012 2013 2014 2015 2016 | login_check_credentials(); if( !g.perm.RdTkt ){ login_needed(g.anon.RdTkt); return; } rid = name_to_rid_www("name"); if( rid==0 ){ fossil_redirect_home(); } zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); if( g.perm.Admin ){ if( db_exists("SELECT 1 FROM shun WHERE uuid=%Q", zUuid) ){ | | | < | 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 | login_check_credentials(); if( !g.perm.RdTkt ){ login_needed(g.anon.RdTkt); return; } rid = name_to_rid_www("name"); if( rid==0 ){ fossil_redirect_home(); } zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); if( g.perm.Admin ){ if( db_exists("SELECT 1 FROM shun WHERE uuid=%Q", zUuid) ){ style_submenu_element("Unshun", "%s/shun?accept=%s&sub=1#accshun", g.zTop, zUuid); }else{ style_submenu_element("Shun", "%s/shun?shun=%s#addshun", g.zTop, zUuid); } } pTktChng = manifest_get(rid, CFTYPE_TICKET, 0); if( pTktChng==0 ) fossil_redirect_home(); zDate = db_text(0, "SELECT datetime(%.12f)", pTktChng->rDate); memcpy(zTktName, pTktChng->zTicketUuid, UUID_SIZE+1); if( g.perm.ModTkt && (zModAction = P("modaction"))!=0 ){ |
︙ | ︙ | |||
2045 2046 2047 2048 2049 2050 2051 | moderation_approve(rid); } } zTktTitle = db_table_has_column("repository", "ticket", "title" ) ? db_text("(No title)", "SELECT title FROM ticket WHERE tkt_uuid=%Q", zTktName) : 0; style_header("Ticket Change Details"); | | | | | | | < | | 2043 2044 2045 2046 2047 2048 2049 2050 2051 2052 2053 2054 2055 2056 2057 2058 2059 2060 2061 2062 2063 2064 2065 2066 2067 2068 2069 2070 2071 2072 2073 2074 2075 2076 2077 2078 2079 2080 2081 | moderation_approve(rid); } } zTktTitle = db_table_has_column("repository", "ticket", "title" ) ? db_text("(No title)", "SELECT title FROM ticket WHERE tkt_uuid=%Q", zTktName) : 0; style_header("Ticket Change Details"); style_submenu_element("Raw", "%R/artifact/%s", zUuid); style_submenu_element("History", "%R/tkthistory/%s", zTktName); style_submenu_element("Page", "%R/tktview/%t", zTktName); style_submenu_element("Timeline", "%R/tkttimeline/%t", zTktName); if( P("plaintext") ){ style_submenu_element("Formatted", "%R/info/%s", zUuid); }else{ style_submenu_element("Plaintext", "%R/info/%s?plaintext", zUuid); } @ <div class="section">Overview</div> @ <p><table class="label-value"> @ <tr><th>Artifact ID:</th> @ <td>%z(href("%R/artifact/%!S",zUuid))%s(zUuid)</a> if( g.perm.Setup ){ @ (%d(rid)) } modPending = moderation_pending(rid); if( modPending ){ @ <span class="modpending">*** Awaiting Moderator Approval ***</span> } @ <tr><th>Ticket:</th> @ <td>%z(href("%R/tktview/%s",zTktName))%s(zTktName)</a> if( zTktTitle ){ @<br />%h(zTktTitle) } @</td></tr> @ <tr><th>Date:</th><td> hyperlink_to_date(zDate, "</td></tr>"); @ <tr><th>User:</th><td> hyperlink_to_user(pTktChng->zUser, zDate, "</td></tr>"); @ </table> |
︙ | ︙ | |||
2249 2250 2251 2252 2253 2254 2255 | { "#d69b80", 0 }, { "#d1d680", 0 }, { "#91d680", 0 }, { "custom", "##" }, }; | | | 2246 2247 2248 2249 2250 2251 2252 2253 2254 2255 2256 2257 2258 2259 2260 | { "#d69b80", 0 }, { "#d1d680", 0 }, { "#91d680", 0 }, { "custom", "##" }, }; int nColor = count(aColor)-1; int stdClrFound = 0; int i; if( zIdPropagate ){ @ <div><label> if( fPropagate ){ @ <input type="checkbox" name="%s(zIdPropagate)" checked="checked" /> |
︙ | ︙ | |||
2823 2824 2825 2826 2827 2828 2829 | ** ** Options: ** ** --author USER Make USER the author for check-in ** -m|--comment COMMENT Make COMMENT the check-in comment ** -M|--message-file FILE Read the amended comment from FILE ** -e|--edit-comment Launch editor to revise comment | | > > > > > > | 2820 2821 2822 2823 2824 2825 2826 2827 2828 2829 2830 2831 2832 2833 2834 2835 2836 2837 2838 2839 2840 2841 2842 2843 2844 2845 2846 2847 | ** ** Options: ** ** --author USER Make USER the author for check-in ** -m|--comment COMMENT Make COMMENT the check-in comment ** -M|--message-file FILE Read the amended comment from FILE ** -e|--edit-comment Launch editor to revise comment ** --date DATETIME Make DATETIME the check-in time ** --bgcolor COLOR Apply COLOR to this check-in ** --branchcolor COLOR Apply and propagate COLOR to the branch ** --tag TAG Add new TAG to this check-in ** --cancel TAG Cancel TAG from this check-in ** --branch NAME Make this check-in the start of branch NAME ** --hide Hide branch starting from this check-in ** --close Mark this "leaf" as closed ** ** DATETIME may be "now" or "YYYY-MM-DDTHH:MM:SS.SSS". If in ** year-month-day form, it may be truncated, the "T" may be replaced by ** a space, and it may also name a timezone offset from UTC as "-HH:MM" ** (westward) or "+HH:MM" (eastward). Either no timezone suffix or "Z" ** means UTC. */ void ci_amend_cmd(void){ int rid; const char *zComment; /* Current comment on the check-in */ const char *zNewComment; /* Revised check-in comment */ const char *zComFile; /* Filename from which to read comment */ const char *zUser; /* Current user for the check-in */ |
︙ | ︙ |
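As a purely illustrative aside (not part of the check-in), DATETIME values acceptable under the rules described in that help text include, for example:

/* Examples of DATETIME strings accepted by --date-override/--date: */
static const char *const azExampleDates[] = {
  "now",                         /* the current time */
  "2016-11-07",                  /* truncated year-month-day form */
  "2016-11-07 00:50:10",         /* "T" replaced by a space */
  "2016-11-07T00:50:10.443Z",    /* full form with explicit UTC */
  "2016-11-06T19:50:10-05:00",   /* five hours westward of UTC */
};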
Changes to src/json.c.
︙ | ︙ | |||
1257 1258 1259 1260 1261 1262 1263 | #define INT(OBJ,K) cson_object_set(o, #K, json_new_int(OBJ.K)) #define CSTR(OBJ,K) cson_object_set(o, #K, OBJ.K ? json_new_string(OBJ.K) : cson_value_null()) #define VAL(K,V) cson_object_set(o, #K, (V) ? (V) : cson_value_null()) VAL(capabilities, json_cap_value()); INT(g, argc); INT(g, isConst); | < | 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 | #define INT(OBJ,K) cson_object_set(o, #K, json_new_int(OBJ.K)) #define CSTR(OBJ,K) cson_object_set(o, #K, OBJ.K ? json_new_string(OBJ.K) : cson_value_null()) #define VAL(K,V) cson_object_set(o, #K, (V) ? (V) : cson_value_null()) VAL(capabilities, json_cap_value()); INT(g, argc); INT(g, isConst); CSTR(g, zConfigDbName); INT(g, repositoryOpen); INT(g, localOpen); INT(g, minPrefix); INT(g, fSqlTrace); INT(g, fSqlStats); INT(g, fSqlPrint); |
︙ | ︙ | |||
1294 1295 1296 1297 1298 1299 1300 | INT(g, rcvid); INT(g, okCsrf); INT(g, thTrace); INT(g, isHome); INT(g, nAux); INT(g, allowSymlinks); | < < | 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 | INT(g, rcvid); INT(g, okCsrf); INT(g, thTrace); INT(g, isHome); INT(g, nAux); INT(g, allowSymlinks); CSTR(g, zOpenRevision); CSTR(g, zLocalRoot); CSTR(g, zPath); CSTR(g, zExtra); CSTR(g, zBaseURL); CSTR(g, zTop); CSTR(g, zContentType); |
︙ | ︙ | |||
1903 1904 1905 1906 1907 1908 1909 | ** Implementation of the /json/stat page/command. ** */ cson_value * json_page_stat(){ i64 t, fsize; int n, m; int full; | < | 1900 1901 1902 1903 1904 1905 1906 1907 1908 1909 1910 1911 1912 1913 | ** Implementation of the /json/stat page/command. ** */ cson_value * json_page_stat(){ i64 t, fsize; int n, m; int full; enum { BufLen = 1000 }; char zBuf[BufLen]; cson_value * jv = NULL; cson_object * jo = NULL; cson_value * jv2 = NULL; cson_object * jo2 = NULL; char * zTmp = NULL; |
︙ | ︙ | |||
1987 1988 1989 1990 1991 1992 1993 | jv2 = cson_value_new_object(); jo2 = cson_value_get_object(jv2); cson_object_set(jo, "sqlite", jv2); sqlite3_snprintf(BufLen, zBuf, "%.19s [%.10s] (%s)", sqlite3_sourceid(), &sqlite3_sourceid()[20], sqlite3_libversion()); SETBUF(jo2, "version"); | < | | | | | | 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 | jv2 = cson_value_new_object(); jo2 = cson_value_get_object(jv2); cson_object_set(jo, "sqlite", jv2); sqlite3_snprintf(BufLen, zBuf, "%.19s [%.10s] (%s)", sqlite3_sourceid(), &sqlite3_sourceid()[20], sqlite3_libversion()); SETBUF(jo2, "version"); cson_object_set(jo2, "pageCount", cson_value_new_integer((cson_int_t)db_int(0, "PRAGMA repository.page_count"))); cson_object_set(jo2, "pageSize", cson_value_new_integer((cson_int_t)db_int(0, "PRAGMA repository.page_size"))); cson_object_set(jo2, "freeList", cson_value_new_integer((cson_int_t)db_int(0, "PRAGMA repository.freelist_count"))); sqlite3_snprintf(BufLen, zBuf, "%s", db_text(0, "PRAGMA repository.encoding")); SETBUF(jo2, "encoding"); sqlite3_snprintf(BufLen, zBuf, "%s", db_text(0, "PRAGMA repository.journal_mode")); cson_object_set(jo2, "journalMode", *zBuf ? cson_value_new_string(zBuf, strlen(zBuf)) : cson_value_null()); return jv; #undef SETBUF } |
︙ | ︙ |
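A minimal sketch (not from this check-in) of the size computation that the revised /json/stat code performs, using PRAGMAs against the "repository" ATTACH name instead of stat()-ing the database file:

/* Assumes db_int64() with the usual (default, SQL, ...) signature. */
sqlite3_int64 nPage  = db_int64(0, "PRAGMA repository.page_count");
sqlite3_int64 szPage = db_int64(0, "PRAGMA repository.page_size");
sqlite3_int64 szRepo = nPage * szPage;   /* reported as repositorySize */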
Changes to src/json_branch.c.
︙ | ︙ | |||
290 291 292 293 294 295 296 | brid = content_put(&branch); if( brid==0 ){ fossil_fatal("Problem committing manifest: %s", g.zErrMsg); } db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", brid); if( manifest_crosslink(brid, &branch, MC_PERMIT_HOOKS)==0 ){ | | | 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 | brid = content_put(&branch); if( brid==0 ){ fossil_fatal("Problem committing manifest: %s", g.zErrMsg); } db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", brid); if( manifest_crosslink(brid, &branch, MC_PERMIT_HOOKS)==0 ){ fossil_fatal("%s", g.zErrMsg); } assert( blob_is_reset(&branch) ); content_deltify(rootid, brid, 0); if( zNewRid ){ *zNewRid = brid; } |
︙ | ︙ |
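For illustration only (not part of the check-in), the reason the error call now passes an explicit format string:

/* If g.zErrMsg happened to contain a printf-style sequence such as
** "branch 50%-done exists", the old call fossil_fatal(g.zErrMsg) would
** interpret "%-d..." as format directives.  The new form prints the
** message verbatim: */
fossil_fatal("%s", g.zErrMsg);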
Changes to src/json_wiki.c.
︙ | ︙ | |||
113 114 115 116 117 118 119 120 | json_new_int((cson_int_t)(zBody?strlen(zBody):0))); }else{ if( contentFormat>0 ){/*HTML-ize it*/ Blob content = empty_blob; Blob raw = empty_blob; zFormat = "html"; if(zBody && *zBody){ blob_append(&raw,zBody,-1); | > > > > | > > > > > > > > > > > > > > | 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 | json_new_int((cson_int_t)(zBody?strlen(zBody):0))); }else{ if( contentFormat>0 ){/*HTML-ize it*/ Blob content = empty_blob; Blob raw = empty_blob; zFormat = "html"; if(zBody && *zBody){ const char *zMimetype = pWiki->zMimetype; if( zMimetype==0 ) zMimetype = "text/plain"; zMimetype = wiki_filter_mimetypes(zMimetype); blob_append(&raw,zBody,-1); if( fossil_strcmp(zMimetype, "text/x-fossil-wiki")==0 ){ wiki_convert(&raw,&content,0); }else if( fossil_strcmp(zMimetype, "text/x-markdown")==0 ){ markdown_to_html(&raw,0,&content); }else if( fossil_strcmp(zMimetype, "text/plain")==0 ){ htmlize_to_blob(&content,blob_str(&raw),blob_size(&raw)); }else{ json_set_err( FSL_JSON_E_UNKNOWN, "Unsupported MIME type '%s' for wiki page '%s'.", zMimetype, pWiki->zWikiTitle ); blob_reset(&content); blob_reset(&raw); cson_free_object(pay); manifest_destroy(pWiki); return NULL; } len = (unsigned int)blob_size(&content); } cson_object_set(pay,"size",json_new_int((cson_int_t)len)); cson_object_set(pay,"content", cson_value_new_string(blob_buffer(&content),len)); blob_reset(&content); blob_reset(&raw); |
︙ | ︙ |
Changes to src/linenoise.c.
︙ | ︙ | |||
103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 | * Sequence: ESC [ 2 J * Effect: clear the whole screen * */ #include <termios.h> #include <unistd.h> #include <stdlib.h> #include <stdio.h> #include <errno.h> #include <string.h> #include <stdlib.h> #include <ctype.h> #include <sys/types.h> #include <sys/ioctl.h> #include <unistd.h> #include "linenoise.h" #define LINENOISE_DEFAULT_HISTORY_MAX_LEN 100 #define LINENOISE_MAX_LINE 4096 static const char *unsupported_term[] = {"dumb","cons25","emacs",NULL}; static linenoiseCompletionCallback *completionCallback = NULL; static struct termios orig_termios; /* In order to restore at exit.*/ | > > | 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 | * Sequence: ESC [ 2 J * Effect: clear the whole screen * */ #include <termios.h> #include <unistd.h> #include <stdarg.h> #include <stdlib.h> #include <stdio.h> #include <errno.h> #include <string.h> #include <stdlib.h> #include <ctype.h> #include <sys/types.h> #include <sys/ioctl.h> #include <unistd.h> #include "linenoise.h" #include "sqlite3.h" #define LINENOISE_DEFAULT_HISTORY_MAX_LEN 100 #define LINENOISE_MAX_LINE 4096 static const char *unsupported_term[] = {"dumb","cons25","emacs",NULL}; static linenoiseCompletionCallback *completionCallback = NULL; static struct termios orig_termios; /* In order to restore at exit.*/ |
︙ | ︙ | |||
189 190 191 192 193 194 195 196 197 198 199 200 201 202 | } \ fprintf(lndebug_fp, ", " fmt, arg1); \ fflush(lndebug_fp); \ } while (0) #else #define lndebug(fmt, arg1) #endif /* ======================= Low level terminal handling ====================== */ /* Set if to use or not the multi line mode. */ void linenoiseSetMultiLine(int ml) { mlmode = ml; } | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 | } \ fprintf(lndebug_fp, ", " fmt, arg1); \ fflush(lndebug_fp); \ } while (0) #else #define lndebug(fmt, arg1) #endif /* =========================== C89 compatibility ============================ */ /* snprintf() is not C89, but sqlite3_vsnprintf() can be adapted. */ static int linenoiseSnprintf(char *str, size_t size, const char *format, ...) { va_list ap; int result; va_start(ap,format); result = (int)strlen(sqlite3_vsnprintf((int)size,str,format,ap)); va_end(ap); return result; } #undef snprintf #define snprintf linenoiseSnprintf /* strdup() is technically not standard C89 despite being in POSIX. */ static char *linenoiseStrdup(const char *s) { size_t size = strlen(s)+1; char *result = malloc(size); if (result) memcpy(result,s,size); return result; } #undef strdup #define strdup linenoiseStrdup /* strcasecmp() is not standard C89. SQLite offers a direct replacement. */ #undef strcasecmp #define strcasecmp sqlite3_stricmp /* ======================= Low level terminal handling ====================== */ /* Set if to use or not the multi line mode. */ void linenoiseSetMultiLine(int ml) { mlmode = ml; } |
︙ | ︙ |
Changes to src/loadctrl.c.
︙ | ︙ | |||
33 34 35 36 37 38 39 | } #endif return 0.0; } /* ** COMMAND: test-loadavg | | | 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 | } #endif return 0.0; } /* ** COMMAND: test-loadavg ** ** %fossil test-loadavg ** ** Print the load average on the host machine. */ void loadavg_test_cmd(void){ fossil_print("load-average: %f\n", load_average()); } |
︙ | ︙ |
Changes to src/login.c.
︙ | ︙ | |||
172 173 174 175 176 177 178 | } /* ** Make sure the accesslog table exists. Create it if it does not */ void create_accesslog_table(void){ db_multi_exec( | | | | 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 | } /* ** Make sure the accesslog table exists. Create it if it does not */ void create_accesslog_table(void){ db_multi_exec( "CREATE TABLE IF NOT EXISTS repository.accesslog(" " uname TEXT," " ipaddr TEXT," " success BOOLEAN," " mtime TIMESTAMP" ");" ); } /* ** Make a record of a login attempt, if login record keeping is enabled. */ static void record_login_attempt( |
︙ | ︙ | |||
391 392 393 394 395 396 397 398 399 400 401 402 403 404 | if( prefix_match("spider", zAgent+i) ) return 0; if( prefix_match("crawl", zAgent+i) ) return 0; /* If a URI appears in the User-Agent, it is probably a bot */ if( strncmp("http", zAgent+i,4)==0 ) return 0; } if( strncmp(zAgent, "Mozilla/", 8)==0 ){ if( atoi(&zAgent[8])<4 ) return 0; /* Many bots advertise as Mozilla/3 */ if( sqlite3_strglob("*Firefox/[1-9]*", zAgent)==0 ) return 1; if( sqlite3_strglob("*Chrome/[1-9]*", zAgent)==0 ) return 1; if( sqlite3_strglob("*(compatible;?MSIE?[1789]*", zAgent)==0 ) return 1; if( sqlite3_strglob("*Trident/[1-9]*;?rv:[1-9]*", zAgent)==0 ) return 1; /* IE11+ */ if( sqlite3_strglob("*AppleWebKit/[1-9]*(KHTML*", zAgent)==0 ) return 1; return 0; } | > > > > > > > | 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 | if( prefix_match("spider", zAgent+i) ) return 0; if( prefix_match("crawl", zAgent+i) ) return 0; /* If a URI appears in the User-Agent, it is probably a bot */ if( strncmp("http", zAgent+i,4)==0 ) return 0; } if( strncmp(zAgent, "Mozilla/", 8)==0 ){ if( atoi(&zAgent[8])<4 ) return 0; /* Many bots advertise as Mozilla/3 */ /* 2016-05-30: A pernicious spider that likes to walk Fossil timelines has ** been detected on the SQLite website. The spider changes its user-agent ** string frequently, but it always seems to include the following text: */ if( sqlite3_strglob("*Safari/537.36Mozilla/5.0*", zAgent)==0 ) return 0; if( sqlite3_strglob("*Firefox/[1-9]*", zAgent)==0 ) return 1; if( sqlite3_strglob("*Chrome/[1-9]*", zAgent)==0 ) return 1; if( sqlite3_strglob("*(compatible;?MSIE?[1789]*", zAgent)==0 ) return 1; if( sqlite3_strglob("*Trident/[1-9]*;?rv:[1-9]*", zAgent)==0 ) return 1; /* IE11+ */ if( sqlite3_strglob("*AppleWebKit/[1-9]*(KHTML*", zAgent)==0 ) return 1; return 0; } |
︙ | ︙ | |||
986 987 988 989 990 991 992 | /* Set the global variables recording the userid and login. The ** "nobody" user is a special case in that g.zLogin==0. */ g.userUid = uid; if( fossil_strcmp(g.zLogin,"nobody")==0 ){ g.zLogin = 0; } | > | > > > > > < | > > > > > > > > > > > > | 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 | /* Set the global variables recording the userid and login. The ** "nobody" user is a special case in that g.zLogin==0. */ g.userUid = uid; if( fossil_strcmp(g.zLogin,"nobody")==0 ){ g.zLogin = 0; } if( PB("isrobot") ){ g.isHuman = 0; }else if( g.zLogin==0 ){ g.isHuman = isHuman(P("HTTP_USER_AGENT")); }else{ g.isHuman = 1; } /* Set the capabilities */ login_replace_capabilities(zCap, 0); /* The auto-hyperlink setting allows hyperlinks to be displayed for users ** who do not have the "h" permission as long as their UserAgent string ** makes it appear that they are human. Check to see if auto-hyperlink is ** enabled for this repository and make appropriate adjustments to the ** permission flags if it is. This should be done before the permissions ** are (potentially) copied to the anonymous permission set; otherwise, ** those will be out-of-sync. */ if( zCap[0] && !g.perm.Hyperlink && g.isHuman && db_get_boolean("auto-hyperlink",1) ){ g.perm.Hyperlink = 1; g.javascriptHyperlink = 1; } /* ** At this point, the capabilities for the logged in user are not going ** to be modified anymore; therefore, we can copy them over to the ones ** for the anonymous user. ** ** WARNING: In the future, please do not add code after this point that ** modifies the capabilities for the logged in user. */ login_set_anon_nobody_capabilities(); /* If the public-pages glob pattern is defined and REQUEST_URI matches ** one of the globs in public-pages, then also add in all default-perms ** permissions. */ zPublicPages = db_get("public-pages",0); if( zPublicPages!=0 ){ Glob *pGlob = glob_create(zPublicPages); |
︙ | ︙ | |||
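A hypothetical set of checks (not from this check-in) showing how isHuman() classifies a few User-Agent strings under the rules above, including the newly added Safari/537.36 spider pattern:

static void isHuman_examples(void){
  assert( isHuman("Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0")==1 );
  assert( isHuman("Wget/1.18 (linux-gnu)")==0 );          /* no "Mozilla/" prefix */
  assert( isHuman("Mozilla/5.0 (compatible; bingbot/2.0)")==0 );  /* contains "bot" */
  assert( isHuman("Mozilla/3.01 (compatible;)")==0 );      /* version less than 4 */
}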
1080 1081 1082 1083 1084 1085 1086 | case 's': p->Setup = 1; /* Fall thru into Admin */ case 'a': p->Admin = p->RdTkt = p->WrTkt = p->Zip = p->RdWiki = p->WrWiki = p->NewWiki = p->ApndWiki = p->Hyperlink = p->Clone = p->NewTkt = p->Password = p->RdAddr = p->TktFmt = p->Attach = p->ApndTkt = p->ModWiki = p->ModTkt = p->Delete = | | | 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 | case 's': p->Setup = 1; /* Fall thru into Admin */ case 'a': p->Admin = p->RdTkt = p->WrTkt = p->Zip = p->RdWiki = p->WrWiki = p->NewWiki = p->ApndWiki = p->Hyperlink = p->Clone = p->NewTkt = p->Password = p->RdAddr = p->TktFmt = p->Attach = p->ApndTkt = p->ModWiki = p->ModTkt = p->Delete = p->WrUnver = p->Private = 1; /* Fall thru into Read/Write */ case 'i': p->Read = p->Write = 1; break; case 'o': p->Read = 1; break; case 'z': p->Zip = 1; break; case 'd': p->Delete = 1; break; case 'h': p->Hyperlink = 1; break; |
︙ | ︙ | |||
1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 | case 'w': p->WrTkt = p->RdTkt = p->NewTkt = p->ApndTkt = 1; break; case 'c': p->ApndTkt = 1; break; case 'q': p->ModTkt = 1; break; case 't': p->TktFmt = 1; break; case 'b': p->Attach = 1; break; case 'x': p->Private = 1; break; /* The "u" privileges is a little different. It recursively ** inherits all privileges of the user named "reader" */ case 'u': { if( (flags & LOGIN_IGNORE_UV)==0 ){ const char *zUser; zUser = db_text("", "SELECT cap FROM user WHERE login='reader'"); | > | 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 | case 'w': p->WrTkt = p->RdTkt = p->NewTkt = p->ApndTkt = 1; break; case 'c': p->ApndTkt = 1; break; case 'q': p->ModTkt = 1; break; case 't': p->TktFmt = 1; break; case 'b': p->Attach = 1; break; case 'x': p->Private = 1; break; case 'y': p->WrUnver = 1; break; /* The "u" privileges is a little different. It recursively ** inherits all privileges of the user named "reader" */ case 'u': { if( (flags & LOGIN_IGNORE_UV)==0 ){ const char *zUser; zUser = db_text("", "SELECT cap FROM user WHERE login='reader'"); |
︙ | ︙ | |||
1178 1179 1180 1181 1182 1183 1184 | case 'r': rc = p->RdTkt; break; case 's': rc = p->Setup; break; case 't': rc = p->TktFmt; break; /* case 'u': READER */ /* case 'v': DEVELOPER */ case 'w': rc = p->WrTkt; break; case 'x': rc = p->Private; break; | | | 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 | case 'r': rc = p->RdTkt; break; case 's': rc = p->Setup; break; case 't': rc = p->TktFmt; break; /* case 'u': READER */ /* case 'v': DEVELOPER */ case 'w': rc = p->WrTkt; break; case 'x': rc = p->Private; break; case 'y': rc = p->WrUnver; break; case 'z': rc = p->Zip; break; default: rc = 0; break; } } return rc; } |
︙ | ︙ | |||
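A hypothetical caller (not part of this check-in) of the new "y" capability, as a page that refuses unversioned-content writes without it:

/* Sketch only: gate an unversioned-content write on the WrUnver bit. */
if( !g.perm.WrUnver && !g.perm.Setup ){
  cgi_set_status(403, "Forbidden");
  return;
}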
1539 1540 1541 1542 1543 1544 1545 | char *zSelfRepo; /* Name of our repository */ char *zSelfLabel; /* Project-name for our repository */ char *zSelfProjCode; /* Our project-code */ char *zSql; /* SQL to run on all peers */ const char *zSelf; /* The ATTACH name of our repository */ *pzErrMsg = 0; /* Default to no errors */ | | | 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 | char *zSelfRepo; /* Name of our repository */ char *zSelfLabel; /* Project-name for our repository */ char *zSelfProjCode; /* Our project-code */ char *zSql; /* SQL to run on all peers */ const char *zSelf; /* The ATTACH name of our repository */ *pzErrMsg = 0; /* Default to no errors */ zSelf = "repository"; /* Get the full pathname of the other repository */ file_canonical_name(zRepo, &fullName, 0); zRepo = fossil_strdup(blob_str(&fullName)); blob_reset(&fullName); /* Get the full pathname for our repository. Also the project code |
︙ | ︙ |
Changes to src/lookslike.c.
︙ | ︙ | |||
48 49 50 51 52 53 54 55 56 57 58 59 60 61 | #define LOOK_ODD ((int)0x00000080) /* An odd number of bytes was found. */ #define LOOK_SHORT ((int)0x00000100) /* Unable to perform full check. */ #define LOOK_INVALID ((int)0x00000200) /* Invalid sequence was found. */ #define LOOK_BINARY (LOOK_NUL | LOOK_LONG | LOOK_SHORT) /* May be binary. */ #define LOOK_EOL (LOOK_LONE_CR | LOOK_LONE_LF | LOOK_CRLF) /* Line seps. */ #endif /* INTERFACE */ /* ** This function attempts to scan each logical line within the blob to ** determine the type of content it appears to contain. The return value ** is a combination of one or more of the LOOK_XXX flags (see above): ** ** !LOOK_BINARY -- The content appears to consist entirely of text; however, | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 | #define LOOK_ODD ((int)0x00000080) /* An odd number of bytes was found. */ #define LOOK_SHORT ((int)0x00000100) /* Unable to perform full check. */ #define LOOK_INVALID ((int)0x00000200) /* Invalid sequence was found. */ #define LOOK_BINARY (LOOK_NUL | LOOK_LONG | LOOK_SHORT) /* May be binary. */ #define LOOK_EOL (LOOK_LONE_CR | LOOK_LONE_LF | LOOK_CRLF) /* Line seps. */ #endif /* INTERFACE */ /* definitions for various UTF-8 sequence lengths, encoded as start value * and size of each valid range belonging to some lead byte*/ #define US2A 0x80, 0x01 /* for lead byte 0xC0 */ #define US2B 0x80, 0x40 /* for lead bytes 0xC2-0xDF */ #define US3A 0xA0, 0x20 /* for lead byte 0xE0 */ #define US3B 0x80, 0x40 /* for lead bytes 0xE1-0xEF */ #define US4A 0x90, 0x30 /* for lead byte 0xF0 */ #define US4B 0x80, 0x40 /* for lead bytes 0xF1-0xF3 */ #define US4C 0x80, 0x10 /* for lead byte 0xF4 */ #define US0A 0x00, 0x00 /* for any other lead byte */ /* a table used for quick lookup of the definition that goes with a * particular lead byte */ static const unsigned char lb_tab[] = { US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US2A, US0A, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US2B, US3A, US3B, US3B, US3B, US3B, US3B, US3B, US3B, US3B, US3B, US3B, US3B, US3B, US3B, US3B, US3B, US4A, US4B, US4B, US4B, US4C, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A, US0A }; /* ** This function attempts to scan each logical line within the blob to ** determine the type of content it appears to contain. The return value ** is a combination of one or more of the LOOK_XXX flags (see above): ** ** !LOOK_BINARY -- The content appears to consist entirely of text; however, |
︙ | ︙ | |||
130 131 132 133 134 135 136 | } if( j>LENGTH_MASK ){ flags |= LOOK_LONG; /* Very long line -> binary */ } return flags; } | < | | > > | | | | | | | > > | < < | > | > | > > > | | < | 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 | } if( j>LENGTH_MASK ){ flags |= LOOK_LONG; /* Very long line -> binary */ } return flags; } /* ** Checks for proper UTF-8. It uses the method described in: ** http://en.wikipedia.org/wiki/UTF-8#Invalid_byte_sequences ** except for the "overlong form" of \u0000 which is not considered ** invalid here: Some languages like Java and Tcl use it. This function ** also considers valid the derivatives CESU-8 & WTF-8 (as described in ** the same wikipedia article referenced previously). For UTF-8 characters ** > 0x7f, the variable 'c' not necessary means the real lead byte. ** It's number of higher 1-bits indicate the number of continuation ** bytes that are expected to be followed. E.g. when 'c' has a value ** in the range 0xc0..0xdf it means that after 'c' a single continuation ** byte is expected. A value 0xe0..0xef means that after 'c' two more ** continuation bytes are expected. */ int invalid_utf8( const Blob *pContent ){ const unsigned char *z = (unsigned char *) blob_buffer(pContent); unsigned int n = blob_size(pContent); unsigned char c; /* lead byte to be handled. */ if( n==0 ) return 0; /* Empty file -> OK */ c = *z; while( --n>0 ){ if( c>=0x80 ){ const unsigned char *def; /* pointer to range table*/ c <<= 1; /* multiply by 2 and get rid of highest bit */ def = &lb_tab[c]; /* search fb's valid range in table */ if( (unsigned int)(*++z-def[0])>=def[1] ){ return LOOK_INVALID; /* Invalid UTF-8 */ } c = (c>=0xC0) ? (c|3) : ' '; /* determine next lead byte */ } else { c = *++z; } } return (c>=0x80) ? LOOK_INVALID : 0; /* Final lead byte must be ASCII. */ } /* ** Define the type needed to represent a Unicode (UTF-16) character. */ #ifndef WCHAR_T # ifdef _WIN32 # define WCHAR_T wchar_t |
︙ | ︙ | |||
396 397 398 399 400 401 402 | fUnicode = 0; }else{ fUnicode = could_be_utf16(&blob, 0) || fForceUtf16; } if( fUnicode ){ lookFlags = looks_like_utf16(&blob, bRevUtf16, 0); }else{ | | | 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 | fUnicode = 0; }else{ fUnicode = could_be_utf16(&blob, 0) || fForceUtf16; } if( fUnicode ){ lookFlags = looks_like_utf16(&blob, bRevUtf16, 0); }else{ lookFlags = looks_like_utf8(&blob, 0) | invalid_utf8(&blob); } } fossil_print("File \"%s\" has %d bytes.\n",g.argv[2],blob_size(&blob)); fossil_print("Starts with UTF-8 BOM: %s\n",fUtf8?"yes":"no"); fossil_print("Starts with UTF-16 BOM: %s\n", fUtf16?(bRevUtf16?"reversed":"yes"):"no"); fossil_print("Looks like UTF-%s: %s\n",fUnicode?"16":"8", |
︙ | ︙ |
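An illustrative walk-through (not from the check-in) of the lead-byte lookup used by invalid_utf8() above, for the three-byte lead byte 0xE2:

unsigned char c = 0xE2;            /* lead byte of a 3-byte sequence */
const unsigned char *def;
c <<= 1;                          /* 0xE2 becomes 0xC4 once the high bit is shifted out */
def = &lb_tab[c - 0x80];          /* two-byte entry bounding the next byte */
/* a continuation byte b is then accepted iff (unsigned)(b - def[0]) < def[1] */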
Changes to src/main.c.
︙ | ︙ | |||
16 17 18 19 20 21 22 23 24 25 26 27 28 29 | ******************************************************************************* ** ** This module codes the main() procedure that runs first when the ** program is invoked. */ #include "VERSION.h" #include "config.h" #include "main.h" #include <string.h> #include <time.h> #include <fcntl.h> #include <sys/types.h> #include <sys/stat.h> #include <stdlib.h> /* atexit() */ | > > > | < < < < < < < | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 | ******************************************************************************* ** ** This module codes the main() procedure that runs first when the ** program is invoked. */ #include "VERSION.h" #include "config.h" #if defined(_WIN32) # include <windows.h> #endif #include "main.h" #include <string.h> #include <time.h> #include <fcntl.h> #include <sys/types.h> #include <sys/stat.h> #include <stdlib.h> /* atexit() */ #if !defined(_WIN32) # include <errno.h> /* errno global */ #endif #ifdef FOSSIL_ENABLE_SSL # include "openssl/crypto.h" #endif #if defined(FOSSIL_ENABLE_MINIZ) # define MINIZ_HEADER_FILE_ONLY # include "miniz.c" #else # include <zlib.h> #endif #if INTERFACE #ifdef FOSSIL_ENABLE_TCL # include "tcl.h" #endif #ifdef FOSSIL_ENABLE_JSON # include "cson_amalgamation.h" /* JSON API. */ # include "json_detail.h" #endif /* ** Size of a UUID in characters */ #define UUID_SIZE 40 /* ** Maximum number of auxiliary parameters on reports |
︙ | ︙ | |||
89 90 91 92 93 94 95 96 97 98 99 100 101 102 | char WrTkt; /* w: make changes to tickets via web */ char ModTkt; /* q: approve and publish ticket changes (Moderator) */ char Attach; /* b: add attachments */ char TktFmt; /* t: create new ticket report formats */ char RdAddr; /* e: read email addresses or other private data */ char Zip; /* z: download zipped artifact via /zip URL */ char Private; /* x: can send and receive private content */ }; #ifdef FOSSIL_ENABLE_TCL /* ** All Tcl related context information is in this structure. This structure ** definition has been copied from and should be kept in sync with the one in ** "th_tcl.c". | > | 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 | char WrTkt; /* w: make changes to tickets via web */ char ModTkt; /* q: approve and publish ticket changes (Moderator) */ char Attach; /* b: add attachments */ char TktFmt; /* t: create new ticket report formats */ char RdAddr; /* e: read email addresses or other private data */ char Zip; /* z: download zipped artifact via /zip URL */ char Private; /* x: can send and receive private content */ char WrUnver; /* y: can push unversioned content */ }; #ifdef FOSSIL_ENABLE_TCL /* ** All Tcl related context information is in this structure. This structure ** definition has been copied from and should be kept in sync with the one in ** "th_tcl.c". |
︙ | ︙ | |||
125 126 127 128 129 130 131 | char *nameOfExe; /* Full path of executable. */ const char *zErrlog; /* Log errors to this file, if not NULL */ int isConst; /* True if the output is unchanging & cacheable */ const char *zVfsName; /* The VFS to use for database connections */ sqlite3 *db; /* The connection to the databases */ sqlite3 *dbConfig; /* Separate connection for global_config table */ char *zAuxSchema; /* Main repository aux-schema */ | | | | < < | 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 | char *nameOfExe; /* Full path of executable. */ const char *zErrlog; /* Log errors to this file, if not NULL */ int isConst; /* True if the output is unchanging & cacheable */ const char *zVfsName; /* The VFS to use for database connections */ sqlite3 *db; /* The connection to the databases */ sqlite3 *dbConfig; /* Separate connection for global_config table */ char *zAuxSchema; /* Main repository aux-schema */ int dbIgnoreErrors; /* Ignore database errors if true */ const char *zConfigDbName;/* Path of the config database. NULL if not open */ sqlite3_int64 now; /* Seconds since 1970 */ int repositoryOpen; /* True if the main repository database is open */ char *zRepositoryOption; /* Most recent cached repository option value */ char *zRepositoryName; /* Name of the repository database file */ char *zLocalDbName; /* Name of the local database file */ char *zOpenRevision; /* Check-in version to use during database open */ int localOpen; /* True if the local database is open */ char *zLocalRoot; /* The directory holding the local database */ int minPrefix; /* Number of digits needed for a distinct UUID */ int fSqlTrace; /* True if --sqltrace flag is present */ int fSqlStats; /* True if --sqltrace or --sqlstats are present */ int fSqlPrint; /* True if -sqlprint flag is present */ |
︙ | ︙ | |||
293 294 295 296 297 298 299 300 | */ #define CGIDEBUG(X) if( g.fDebug ) cgi_debug X #endif Global g; /* | | < < | < | > | < < < < < < < > | < < < < < < < < < < < < < < < > | < < < < < < < < < < < < < < < < < < < < < | | < | | < > > | 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 | */ #define CGIDEBUG(X) if( g.fDebug ) cgi_debug X #endif Global g; /* ** atexit() handler which frees up "some" of the resources ** used by fossil. */ static void fossil_atexit(void) { #if USE_SEE /* ** Zero, unlock, and free the saved database encryption key now. */ db_unsave_encryption_key(); #endif #if defined(_WIN32) || defined(__BIONIC__) /* ** Free the secure getpass() buffer now. */ freepass(); #endif #if defined(_WIN32) && !defined(_WIN64) && defined(FOSSIL_ENABLE_TCL) && \ defined(USE_TCL_STUBS) /* ** If Tcl is compiled on Windows using the latest MinGW, Fossil can crash ** when exiting while a stubs-enabled Tcl is still loaded. This is due to ** a bug in MinGW, see: ** |
︙ | ︙ | |||
556 557 558 559 560 561 562 563 564 565 566 567 568 569 | static void fossil_sqlite_log(void *notUsed, int iCode, const char *zErrmsg){ #ifdef __APPLE__ /* Disable the file alias warning on apple products because Time Machine ** creates lots of aliases and the warning alarms people. */ if( iCode==SQLITE_WARNING ) return; #endif if( iCode==SQLITE_SCHEMA ) return; fossil_warning("%s: %s", fossil_sqlite_return_code_name(iCode), zErrmsg); } /* ** This function attempts to find command line options known to contain ** bitwise flags and initializes the associated global variables. After ** this function executes, all global variables (i.e. in the "g" struct) | > | 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 | static void fossil_sqlite_log(void *notUsed, int iCode, const char *zErrmsg){ #ifdef __APPLE__ /* Disable the file alias warning on apple products because Time Machine ** creates lots of aliases and the warning alarms people. */ if( iCode==SQLITE_WARNING ) return; #endif if( iCode==SQLITE_SCHEMA ) return; if( g.dbIgnoreErrors ) return; fossil_warning("%s: %s", fossil_sqlite_return_code_name(iCode), zErrmsg); } /* ** This function attempts to find command line options known to contain ** bitwise flags and initializes the associated global variables. After ** this function executes, all global variables (i.e. in the "g" struct) |
︙ | ︙ | |||
588 589 590 591 592 593 594 | #if defined(_WIN32) int _CRT_glob = 0x0001; /* See MinGW bug #2062 */ #endif int main(int argc, char **argv) #endif { const char *zCmdName = "unknown"; | | | | | 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 | #if defined(_WIN32) int _CRT_glob = 0x0001; /* See MinGW bug #2062 */ #endif int main(int argc, char **argv) #endif { const char *zCmdName = "unknown"; const CmdOrPage *pCmd = 0; int rc; if( sqlite3_libversion_number()<3014000 ){ fossil_fatal("Unsuitable SQLite version %s, must be at least 3.14.0", sqlite3_libversion()); } sqlite3_config(SQLITE_CONFIG_MULTITHREAD); sqlite3_config(SQLITE_CONFIG_LOG, fossil_sqlite_log, 0); memset(&g, 0, sizeof(g)); g.now = time(0); g.httpHeader = empty_blob; |
︙ | ︙ | |||
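A hypothetical use (not in this check-in) of the new g.dbIgnoreErrors counter, probing for an optional table without polluting the error log that fossil_sqlite_log() would otherwise write to:

g.dbIgnoreErrors++;
rc = db_exists("SELECT 1 FROM accesslog LIMIT 1");
g.dbIgnoreErrors--;
if( !rc ){
  /* table is absent; create it or skip the feature quietly */
}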
732 733 734 735 736 737 738 | g.httpOut = stdout; g.fullHttpReply = !g.isHTTP; fossil_fatal("file descriptor 2 is not open. (fd=%d, errno=%d)", fd, x); } } #endif | | | < | | < < < < < | 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 | g.httpOut = stdout; g.fullHttpReply = !g.isHTTP; fossil_fatal("file descriptor 2 is not open. (fd=%d, errno=%d)", fd, x); } } #endif rc = dispatch_name_search(zCmdName, CMDFLAG_COMMAND|CMDFLAG_PREFIX, &pCmd); if( rc==1 ){ #ifdef FOSSIL_ENABLE_TH1_HOOKS if( !g.isHTTP && !g.fNoThHook ){ rc = Th_CommandHook(zCmdName, 0); }else{ rc = TH_OK; } if( rc==TH_OK || rc==TH_RETURN || rc==TH_CONTINUE ){ if( rc==TH_OK || rc==TH_RETURN ){ #endif fossil_fatal("%s: unknown command: %s\n" "%s: use \"help\" for more information", g.argv[0], zCmdName, g.argv[0]); #ifdef FOSSIL_ENABLE_TH1_HOOKS } if( !g.isHTTP && !g.fNoThHook && (rc==TH_OK || rc==TH_CONTINUE) ){ Th_CommandNotify(zCmdName, 0); } } fossil_exit(0); #endif }else if( rc==2 ){ Blob couldbe; blob_init(&couldbe,0,0); dispatch_matching_names(zCmdName, &couldbe); fossil_print("%s: ambiguous command prefix: %s\n" "%s: could be any of:%s\n" "%s: use \"help\" for more information\n", g.argv[0], zCmdName, g.argv[0], blob_str(&couldbe), g.argv[0]); fossil_exit(1); } atexit( fossil_atexit ); |
︙ | ︙ | |||
790 791 792 793 794 795 796 | ** TH_RETURN: The xFunc() will be executed, the TH1 notification will be ** skipped. ** ** TH_CONTINUE: The xFunc() will be skipped, the TH1 notification will be ** executed. */ if( !g.isHTTP && !g.fNoThHook ){ | | | | | 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 | ** TH_RETURN: The xFunc() will be executed, the TH1 notification will be ** skipped. ** ** TH_CONTINUE: The xFunc() will be skipped, the TH1 notification will be ** executed. */ if( !g.isHTTP && !g.fNoThHook ){ rc = Th_CommandHook(pCmd->zName, pCmd->eCmdFlags); }else{ rc = TH_OK; } if( rc==TH_OK || rc==TH_RETURN || rc==TH_CONTINUE ){ if( rc==TH_OK || rc==TH_RETURN ){ #endif pCmd->xFunc(); #ifdef FOSSIL_ENABLE_TH1_HOOKS } if( !g.isHTTP && !g.fNoThHook && (rc==TH_OK || rc==TH_CONTINUE) ){ Th_CommandNotify(pCmd->zName, pCmd->eCmdFlags); } } #endif fossil_exit(0); /*NOT_REACHED*/ return 0; } |
︙ | ︙ | |||
942 943 944 945 946 947 948 | } } } /* ** Print a list of words in multiple columns. */ | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 889 890 891 892 893 894 895 896 897 898 899 900 901 902 | } } } /* ** Print a list of words in multiple columns. */ /* ** This function returns a human readable version string. */ const char *get_version(){ static const char version[] = RELEASE_VERSION " " MANIFEST_VERSION " " |
︙ | ︙ | |||
1141 1142 1143 1144 1145 1146 1147 | @ <blockquote><pre> @ %h(blob_str(&versionInfo)) @ </pre></blockquote> style_footer(); } | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 | @ <blockquote><pre> @ %h(blob_str(&versionInfo)) @ </pre></blockquote> style_footer(); } /* ** Set the g.zBaseURL value to the full URL for the toplevel of ** the fossil tree. Set g.zTop to g.zBaseURL without the ** leading "http://" and the host and port. ** ** The g.zBaseURL is normally set based on HTTP_HOST and SCRIPT_NAME ** environment variables. However, if zAltBase is not NULL then it |
︙ | ︙ | |||
1577 1578 1579 1580 1581 1582 1583 | static int repo_list_page(void){ Blob base; int n = 0; assert( g.db==0 ); blob_init(&base, g.zRepositoryName, -1); sqlite3_open(":memory:", &g.db); | | | | | | 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 | static int repo_list_page(void){ Blob base; int n = 0; assert( g.db==0 ); blob_init(&base, g.zRepositoryName, -1); sqlite3_open(":memory:", &g.db); db_multi_exec("CREATE TABLE sfile(pathname TEXT);"); db_multi_exec("CREATE TABLE vfile(pathname);"); vfile_scan(&base, blob_size(&base), 0, 0, 0); db_multi_exec("DELETE FROM sfile WHERE pathname NOT GLOB '*[^/].fossil'"); n = db_int(0, "SELECT count(*) FROM sfile"); if( n>0 ){ Stmt q; @ <html> @ <head> @ <base href="%s(g.zBaseURL)/" /> @ <title>Repository List</title> @ </head> @ <body> @ <h1>Available Repositories:</h1> @ <ol> db_prepare(&q, "SELECT pathname, substr(pathname,-7,-100000)||'/home'" " FROM sfile ORDER BY pathname COLLATE nocase;"); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); const char *zUrl = db_column_text(&q, 1); @ <li><a href="%R/%h(zUrl)" target="_blank">%h(zName)</a></li> } @ </ol> @ </body> |
︙ | ︙ | |||
1639 1640 1641 1642 1643 1644 1645 | const char *zNotFound, /* Redirect here on a 404 if not NULL */ Glob *pFileGlob, /* Deliver static files matching */ int allowRepoList /* Send repo list for "/" URL */ ){ const char *zPathInfo; const char *zDirPathInfo; char *zPath = NULL; | < > | 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 | const char *zNotFound, /* Redirect here on a 404 if not NULL */ Glob *pFileGlob, /* Deliver static files matching */ int allowRepoList /* Send repo list for "/" URL */ ){ const char *zPathInfo; const char *zDirPathInfo; char *zPath = NULL; int i; const CmdOrPage *pCmd = 0; /* Handle universal query parameters */ if( PB("utc") ){ g.fTimeFormat = 1; }else if( PB("localtime") ){ g.fTimeFormat = 2; } |
︙ | ︙ | |||
1794 1795 1796 1797 1798 1799 1800 | zPath = mprintf("%s", zPathInfo); } /* Make g.zPath point to the first element of the path. Make ** g.zExtra point to everything past that point. */ while(1){ | < > > > > > > > | 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 | zPath = mprintf("%s", zPathInfo); } /* Make g.zPath point to the first element of the path. Make ** g.zExtra point to everything past that point. */ while(1){ g.zPath = &zPath[1]; for(i=1; zPath[i] && zPath[i]!='/'; i++){} if( zPath[i]=='/' ){ zPath[i] = 0; g.zExtra = &zPath[i+1]; #ifdef FOSSIL_ENABLE_SUBREPOSITORY char *zAltRepo = 0; /* 2016-09-21: Subrepos are undocumented and apparently no longer work. ** So they are now removed unless the -DFOSSIL_ENABLE_SUBREPOSITORY ** compile-time option is used. If there are no complaints after ** a while, we can delete the code entirely. */ /* Look for sub-repositories. A sub-repository is another repository ** that accepts the login credentials of the current repository. A ** subrepository is identified by a CONFIG table entry "subrepo:NAME" ** where NAME is the first component of the path. The value of the ** the CONFIG entries is the string "USER:FILENAME" where USER is the ** USER name to log in as in the subrepository and FILENAME is the ** repository filename. |
︙ | ︙ | |||
1838 1839 1840 1841 1842 1843 1844 1845 1846 1847 1848 1849 1850 1851 | g.perm.Password = 0; zPath += i; nHost = g.zTop - g.zBaseURL; g.zBaseURL = mprintf("%z/%s", g.zBaseURL, g.zPath); g.zTop = g.zBaseURL + nHost; continue; } }else{ g.zExtra = 0; } break; } #ifdef FOSSIL_ENABLE_JSON /* | > | 1457 1458 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 | g.perm.Password = 0; zPath += i; nHost = g.zTop - g.zBaseURL; g.zBaseURL = mprintf("%z/%s", g.zBaseURL, g.zPath); g.zTop = g.zBaseURL + nHost; continue; } #endif /* FOSSIL_ENABLE_SUBREPOSITORY */ }else{ g.zExtra = 0; } break; } #ifdef FOSSIL_ENABLE_JSON /* |
︙ | ︙ | |||
1879 1880 1881 1882 1883 1884 1885 | } #endif } /* Locate the method specified by the path and execute the function ** that implements that method. */ | | | 1499 1500 1501 1502 1503 1504 1505 1506 1507 1508 1509 1510 1511 1512 1513 | } #endif } /* Locate the method specified by the path and execute the function ** that implements that method. */ if( dispatch_name_search(g.zPath-1, CMDFLAG_WEBPAGE, &pCmd) ){ #ifdef FOSSIL_ENABLE_JSON if(g.json.isJsonMode){ json_err(FSL_JSON_E_RESOURCE_NOT_FOUND,NULL,0); }else #endif { #ifdef FOSSIL_ENABLE_TH1_HOOKS |
︙ | ︙ | |||
1907 1908 1909 1910 1911 1912 1913 | } if( !g.fNoThHook && (rc==TH_OK || rc==TH_CONTINUE) ){ Th_WebpageNotify(g.zPath, 0); } } #endif } | | | 1527 1528 1529 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 | } if( !g.fNoThHook && (rc==TH_OK || rc==TH_CONTINUE) ){ Th_WebpageNotify(g.zPath, 0); } } #endif } }else if( pCmd->xFunc!=page_xfer && db_schema_is_outofdate() ){ #ifdef FOSSIL_ENABLE_JSON if(g.json.isJsonMode){ json_err(FSL_JSON_E_DB_NEEDS_REBUILD,NULL,0); }else #endif { @ <h1>Server Configuration Error</h1> |
︙ | ︙ | |||
1939 1940 1941 1942 1943 1944 1945 | ** skipped. ** ** TH_CONTINUE: The xFunc() will be skipped, the TH1 notification will be ** executed. */ int rc; if( !g.fNoThHook ){ | | | | | 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 | ** skipped. ** ** TH_CONTINUE: The xFunc() will be skipped, the TH1 notification will be ** executed. */ int rc; if( !g.fNoThHook ){ rc = Th_WebpageHook(pCmd->zName+1, pCmd->eCmdFlags); }else{ rc = TH_OK; } if( rc==TH_OK || rc==TH_RETURN || rc==TH_CONTINUE ){ if( rc==TH_OK || rc==TH_RETURN ){ #endif pCmd->xFunc(); #ifdef FOSSIL_ENABLE_TH1_HOOKS } if( !g.fNoThHook && (rc==TH_OK || rc==TH_CONTINUE) ){ Th_WebpageNotify(pCmd->zName+1, pCmd->eCmdFlags); } } #endif } /* Return the result. */ |
︙ | ︙ | |||
2016 2017 2018 2019 2020 2021 2022 | cgi_reply(); } } /* ** COMMAND: cgi* ** | | > > | < < > | | > > > > > > > | > > > > > > > | > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > | 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 1696 1697 1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 | cgi_reply(); } } /* ** COMMAND: cgi* ** ** Usage: %fossil ?cgi? FILE ** ** This command causes Fossil to generate reply to a CGI request. ** ** The FILE argument is the name of a control file that provides Fossil ** with important information such as where to find its repository. In ** a typical CGI deployment, FILE is the name of the CGI script and will ** typically look something like this: ** ** #!/usr/bin/fossil ** repository: /home/somebody/project.db ** ** The command name, "cgi", may be omitted if the GATEWAY_INTERFACE ** environment variable is set to "CGI", which should always be the ** case for CGI scripts run by a webserver. Fossil ignores any lines ** that begin with "#". ** ** The following control lines are recognized: ** ** repository: PATH Name of the Fossil repository ** ** directory: PATH Name of a directory containing many Fossil ** repositories whose names all end with ".fossil". ** There should only be one of "repository:" ** or "directory:" ** ** notfound: URL When in "directory:" mode, redirect to ** URL if no suitable repository is found. ** ** repolist When in "directory:" mode, display a page ** showing a list of available repositories if ** the URL is "/". ** ** localauth Grant administrator privileges to connections ** from 127.0.0.1 or ::1. ** ** skin: LABEL Use the built-in skin called LABEL rather than ** the default. If there are no skins called LABEL ** then this line is a no-op. ** ** files: GLOBLIST GLOBLIST is a comma-separated list of GLOB ** patterns that specify files that can be ** returned verbatim. This feature allows Fossil ** to act as a web server returning static ** content. ** ** setenv: NAME VALUE Set environment variable NAME to VALUE. Or ** if VALUE is omitted, unset NAME. ** ** HOME: PATH Shorthand for "setenv: HOME PATH" ** ** debug: FILE Causing debugging information to be written ** into FILE. ** ** errorlog: FILE Warnings, errors, and panics written to FILE. ** ** redirect: REPO URL Extract the "name" query parameter and search ** REPO for a check-in or ticket that matches the ** value of "name", then redirect to URL. There ** can be multiple "redirect:" lines that are ** processed in order. If the REPO is "*", then ** an unconditional redirect to URL is taken. ** ** Most CGI files contain only a "repository:" line. It is uncommon to ** use any other option. ** ** See also: http, server, winsrv */ void cmd_cgi(void){ const char *zFile; const char *zNotFound = 0; char **azRedirect = 0; /* List of repositories to redirect to */ |
︙ | ︙ | |||
2134 2135 2136 2137 2138 2139 2140 | ** found it is returned verbatim. This feature allows "fossil server" ** to function as a primitive web-server delivering arbitrary content. */ pFileGlob = glob_create(blob_str(&value)); blob_reset(&value); continue; } | | < > | > > | 1805 1806 1807 1808 1809 1810 1811 1812 1813 1814 1815 1816 1817 1818 1819 1820 1821 1822 1823 1824 1825 1826 | ** found it is returned verbatim. This feature allows "fossil server" ** to function as a primitive web-server delivering arbitrary content. */ pFileGlob = glob_create(blob_str(&value)); blob_reset(&value); continue; } if( blob_eq(&key, "setenv:") && blob_token(&line, &value) ){ /* setenv: NAME VALUE ** setenv: NAME ** ** Sets environment variable NAME to VALUE. If VALUE is omitted, then ** the environment variable is unset. */ blob_token(&line,&value2); fossil_setenv(blob_str(&value), blob_str(&value2)); blob_reset(&value); blob_reset(&value2); continue; } if( blob_eq(&key, "debug:") && blob_token(&line, &value) ){ /* debug: FILENAME |
︙ | ︙ | |||
2250 2251 2252 2253 2254 2255 2256 2257 2258 2259 2260 2261 2262 2263 | }else{ db_open_repository(zRepo); } } } } /* ** undocumented format: ** ** fossil http INFILE OUTFILE IPADDR ?REPOSITORY? ** ** The argv==6 form (with no options) is used by the win32 server only. ** | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1923 1924 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960 1961 1962 1963 1964 1965 1966 1967 | }else{ db_open_repository(zRepo); } } } } #if defined(_WIN32) && USE_SEE /* ** This function attempts to parse a string value in the following ** format: ** ** "%lu:%p:%u" ** ** There are three parts, which must be delimited by colons. The ** first part is an unsigned long integer in base-10 (decimal) format. ** The second part is a numerical representation of a native pointer, ** in the appropriate implementation defined format. The third part ** is an unsigned integer in base-10 (decimal) format. ** ** If the specified value cannot be parsed, for any reason, a fatal ** error will be raised and the process will be terminated. */ void parse_pid_key_value( const char *zPidKey, /* The value to be parsed. */ DWORD *pProcessId, /* The extracted process identifier. */ LPVOID *ppAddress, /* The extracted pointer value. */ SIZE_T *pnSize /* The extracted size value. */ ){ unsigned int nSize = 0; if( sscanf(zPidKey, "%lu:%p:%u", pProcessId, ppAddress, &nSize)==3 ){ *pnSize = (SIZE_T)nSize; }else{ fossil_fatal("failed to parse pid key"); } } #endif /* ** undocumented format: ** ** fossil http INFILE OUTFILE IPADDR ?REPOSITORY? ** ** The argv==6 form (with no options) is used by the win32 server only. ** |
︙ | ︙ | |||
2300 2301 2302 2303 2304 2305 2306 2307 2308 2309 2310 2311 2312 2313 2314 2315 2316 2317 2318 2319 2320 2321 2322 2323 2324 2325 | ** --https signal a request coming in via https ** --nojail drop root privilege but do not enter the chroot jail ** --nossl signal that no SSL connections are available ** --notfound URL use URL as "HTTP 404, object not found" page. ** --repolist If REPOSITORY is directory, URL "/" lists all repos ** --scgi Interpret input as SCGI rather than HTTP ** --skin LABEL Use override skin LABEL ** ** See also: cgi, server, winsrv */ void cmd_http(void){ const char *zIpAddr = 0; const char *zNotFound; const char *zHost; const char *zAltBase; const char *zFileGlob; int useSCGI; int noJail; int allowRepoList; /* The winhttp module passes the --files option as --files-urlenc with ** the argument being URL encoded, to avoid wildcard expansion in the ** shell. This option is for internal use and is undocumented. */ zFileGlob = find_option("files-urlenc",0,1); if( zFileGlob ){ | > > > > > > > > | 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 2031 2032 2033 2034 2035 2036 2037 | ** --https signal a request coming in via https ** --nojail drop root privilege but do not enter the chroot jail ** --nossl signal that no SSL connections are available ** --notfound URL use URL as "HTTP 404, object not found" page. ** --repolist If REPOSITORY is directory, URL "/" lists all repos ** --scgi Interpret input as SCGI rather than HTTP ** --skin LABEL Use override skin LABEL ** --th-trace trace TH1 execution (for debugging purposes) ** --usepidkey Use saved encryption key from parent process. This is ** only necessary when using SEE on Windows. ** ** See also: cgi, server, winsrv */ void cmd_http(void){ const char *zIpAddr = 0; const char *zNotFound; const char *zHost; const char *zAltBase; const char *zFileGlob; int useSCGI; int noJail; int allowRepoList; #if defined(_WIN32) && USE_SEE const char *zPidKey; #endif Th_InitTraceLog(); /* The winhttp module passes the --files option as --files-urlenc with ** the argument being URL encoded, to avoid wildcard expansion in the ** shell. This option is for internal use and is undocumented. */ zFileGlob = find_option("files-urlenc",0,1); if( zFileGlob ){ |
︙ | ︙ | |||
2340 2341 2342 2343 2344 2345 2346 2347 2348 2349 2350 2351 2352 2353 | if( zAltBase ) set_base_url(zAltBase); if( find_option("https",0,0)!=0 ){ zIpAddr = fossil_getenv("REMOTE_HOST"); /* From stunnel */ cgi_replace_parameter("HTTPS","on"); } zHost = find_option("host", 0, 1); if( zHost ) cgi_replace_parameter("HTTP_HOST",zHost); /* We should be done with options.. */ verify_all_options(); if( g.argc!=2 && g.argc!=3 && g.argc!=5 && g.argc!=6 ){ fossil_fatal("no repository specified"); } | > > > > > > > > > > > | 2052 2053 2054 2055 2056 2057 2058 2059 2060 2061 2062 2063 2064 2065 2066 2067 2068 2069 2070 2071 2072 2073 2074 2075 2076 | if( zAltBase ) set_base_url(zAltBase); if( find_option("https",0,0)!=0 ){ zIpAddr = fossil_getenv("REMOTE_HOST"); /* From stunnel */ cgi_replace_parameter("HTTPS","on"); } zHost = find_option("host", 0, 1); if( zHost ) cgi_replace_parameter("HTTP_HOST",zHost); #if defined(_WIN32) && USE_SEE zPidKey = find_option("usepidkey", 0, 1); if( zPidKey ){ DWORD processId = 0; LPVOID pAddress = NULL; SIZE_T nSize = 0; parse_pid_key_value(zPidKey, &processId, &pAddress, &nSize); db_read_saved_encryption_key_from_process(processId, pAddress, nSize); } #endif /* We should be done with options.. */ verify_all_options(); if( g.argc!=2 && g.argc!=3 && g.argc!=5 && g.argc!=6 ){ fossil_fatal("no repository specified"); } |
︙ | ︙ | |||
2501 2502 2503 2504 2505 2506 2507 | ** --nossl signal that no SSL connections are available ** --notfound URL Redirect ** -P|--port TCPPORT listen to request on port TCPPORT ** --th-trace trace TH1 execution (for debugging purposes) ** --repolist If REPOSITORY is dir, URL "/" lists repos. ** --scgi Accept SCGI rather than HTTP ** --skin LABEL Use override skin LABEL | | > > > > | 2224 2225 2226 2227 2228 2229 2230 2231 2232 2233 2234 2235 2236 2237 2238 2239 2240 2241 2242 2243 2244 2245 2246 2247 2248 2249 2250 2251 2252 2253 2254 2255 2256 2257 2258 2259 2260 2261 2262 | ** --nossl signal that no SSL connections are available ** --notfound URL Redirect ** -P|--port TCPPORT listen to request on port TCPPORT ** --th-trace trace TH1 execution (for debugging purposes) ** --repolist If REPOSITORY is dir, URL "/" lists repos. ** --scgi Accept SCGI rather than HTTP ** --skin LABEL Use override skin LABEL ** --usepidkey Use saved encryption key from parent process. This is ** only necessary when using SEE on Windows. ** ** See also: cgi, http, winsrv */ void cmd_webserver(void){ int iPort, mxPort; /* Range of TCP ports allowed */ const char *zPort; /* Value of the --port option */ const char *zBrowser; /* Name of web browser program */ char *zBrowserCmd = 0; /* Command to launch the web browser */ int isUiCmd; /* True if command is "ui", not "server' */ const char *zNotFound; /* The --notfound option or NULL */ int flags = 0; /* Server flags */ #if !defined(_WIN32) int noJail; /* Do not enter the chroot jail */ #endif int allowRepoList; /* List repositories on URL "/" */ const char *zAltBase; /* Argument to the --baseurl option */ const char *zFileGlob; /* Static content must match this */ char *zIpAddr = 0; /* Bind to this IP address */ int fCreate = 0; /* The --create flag */ const char *zInitPage = 0; /* Start on this page. --page option */ #if defined(_WIN32) && USE_SEE const char *zPidKey; #endif #if defined(_WIN32) const char *zStopperFile; /* Name of file used to terminate server */ zStopperFile = find_option("stopper", 0, 1); #endif zFileGlob = find_option("files-urlenc",0,1); |
︙ | ︙ | |||
2565 2566 2567 2568 2569 2570 2571 2572 2573 2574 2575 2576 2577 2578 | }else{ /* without --https, defaults to not available. */ g.sslNotAvailable = 1; } if( find_option("localhost", 0, 0)!=0 ){ flags |= HTTP_SERVER_LOCALHOST; } /* We should be done with options.. */ verify_all_options(); if( g.argc!=2 && g.argc!=3 ) usage("?REPOSITORY?"); if( isUiCmd ){ flags |= HTTP_SERVER_LOCALHOST|HTTP_SERVER_REPOLIST; | > > > > > > > > > > > | 2292 2293 2294 2295 2296 2297 2298 2299 2300 2301 2302 2303 2304 2305 2306 2307 2308 2309 2310 2311 2312 2313 2314 2315 2316 | }else{ /* without --https, defaults to not available. */ g.sslNotAvailable = 1; } if( find_option("localhost", 0, 0)!=0 ){ flags |= HTTP_SERVER_LOCALHOST; } #if defined(_WIN32) && USE_SEE zPidKey = find_option("usepidkey", 0, 1); if( zPidKey ){ DWORD processId = 0; LPVOID pAddress = NULL; SIZE_T nSize = 0; parse_pid_key_value(zPidKey, &processId, &pAddress, &nSize); db_read_saved_encryption_key_from_process(processId, pAddress, nSize); } #endif /* We should be done with options.. */ verify_all_options(); if( g.argc!=2 && g.argc!=3 ) usage("?REPOSITORY?"); if( isUiCmd ){ flags |= HTTP_SERVER_LOCALHOST|HTTP_SERVER_REPOLIST; |
︙ | ︙ | |||
2605 2606 2607 2608 2609 2610 2611 | #if !defined(__DARWIN__) && !defined(__APPLE__) && !defined(__HAIKU__) zBrowser = db_get("web-browser", 0); if( zBrowser==0 ){ static const char *const azBrowserProg[] = { "xdg-open", "gnome-open", "firefox", "google-chrome" }; int i; zBrowser = "echo"; | | | 2343 2344 2345 2346 2347 2348 2349 2350 2351 2352 2353 2354 2355 2356 2357 | #if !defined(__DARWIN__) && !defined(__APPLE__) && !defined(__HAIKU__) zBrowser = db_get("web-browser", 0); if( zBrowser==0 ){ static const char *const azBrowserProg[] = { "xdg-open", "gnome-open", "firefox", "google-chrome" }; int i; zBrowser = "echo"; for(i=0; i<count(azBrowserProg); i++){ if( binaryOnPath(azBrowserProg[i]) ){ zBrowser = azBrowserProg[i]; break; } } } #else |
︙ | ︙ |
Changes to src/main.mk.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 | # ############################################################################## # WARNING: DO NOT EDIT, AUTOMATICALLY GENERATED FILE (SEE "src/makemake.tcl") ############################################################################## # # This file is automatically generated. Instead of editing this # file, edit "makemake.tcl" then run "tclsh makemake.tcl" # to regenerate this file. # # This file is included by primary Makefile. # XTCC = $(TCC) -I. -I$(SRCDIR) -I$(OBJDIR) $(TCCFLAGS) $(CFLAGS) SRC = \ $(SRCDIR)/add.c \ $(SRCDIR)/allrepo.c \ $(SRCDIR)/attach.c \ | > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 | # ############################################################################## # WARNING: DO NOT EDIT, AUTOMATICALLY GENERATED FILE (SEE "src/makemake.tcl") ############################################################################## # # This file is automatically generated. Instead of editing this # file, edit "makemake.tcl" then run "tclsh makemake.tcl" # to regenerate this file. # # This file is included by primary Makefile. # XBCC = $(BCC) $(BCCFLAGS) $(CFLAGS) XTCC = $(TCC) -I. -I$(SRCDIR) -I$(OBJDIR) $(TCCFLAGS) $(CFLAGS) SRC = \ $(SRCDIR)/add.c \ $(SRCDIR)/allrepo.c \ $(SRCDIR)/attach.c \ |
︙ | ︙ | |||
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 | $(SRCDIR)/content.c \ $(SRCDIR)/db.c \ $(SRCDIR)/delta.c \ $(SRCDIR)/deltacmd.c \ $(SRCDIR)/descendants.c \ $(SRCDIR)/diff.c \ $(SRCDIR)/diffcmd.c \ $(SRCDIR)/doc.c \ $(SRCDIR)/encode.c \ $(SRCDIR)/event.c \ $(SRCDIR)/export.c \ $(SRCDIR)/file.c \ $(SRCDIR)/finfo.c \ $(SRCDIR)/foci.c \ $(SRCDIR)/fusefs.c \ $(SRCDIR)/glob.c \ $(SRCDIR)/graph.c \ $(SRCDIR)/gzip.c \ $(SRCDIR)/http.c \ $(SRCDIR)/http_socket.c \ $(SRCDIR)/http_ssl.c \ | > > | 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 | $(SRCDIR)/content.c \ $(SRCDIR)/db.c \ $(SRCDIR)/delta.c \ $(SRCDIR)/deltacmd.c \ $(SRCDIR)/descendants.c \ $(SRCDIR)/diff.c \ $(SRCDIR)/diffcmd.c \ $(SRCDIR)/dispatch.c \ $(SRCDIR)/doc.c \ $(SRCDIR)/encode.c \ $(SRCDIR)/event.c \ $(SRCDIR)/export.c \ $(SRCDIR)/file.c \ $(SRCDIR)/finfo.c \ $(SRCDIR)/foci.c \ $(SRCDIR)/fshell.c \ $(SRCDIR)/fusefs.c \ $(SRCDIR)/glob.c \ $(SRCDIR)/graph.c \ $(SRCDIR)/gzip.c \ $(SRCDIR)/http.c \ $(SRCDIR)/http_socket.c \ $(SRCDIR)/http_ssl.c \ |
︙ | ︙ | |||
114 115 116 117 118 119 120 121 122 123 124 125 126 127 | $(SRCDIR)/tar.c \ $(SRCDIR)/th_main.c \ $(SRCDIR)/timeline.c \ $(SRCDIR)/tkt.c \ $(SRCDIR)/tktsetup.c \ $(SRCDIR)/undo.c \ $(SRCDIR)/unicode.c \ $(SRCDIR)/update.c \ $(SRCDIR)/url.c \ $(SRCDIR)/user.c \ $(SRCDIR)/utf8.c \ $(SRCDIR)/util.c \ $(SRCDIR)/verify.c \ $(SRCDIR)/vfile.c \ | > | 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 | $(SRCDIR)/tar.c \ $(SRCDIR)/th_main.c \ $(SRCDIR)/timeline.c \ $(SRCDIR)/tkt.c \ $(SRCDIR)/tktsetup.c \ $(SRCDIR)/undo.c \ $(SRCDIR)/unicode.c \ $(SRCDIR)/unversioned.c \ $(SRCDIR)/update.c \ $(SRCDIR)/url.c \ $(SRCDIR)/user.c \ $(SRCDIR)/utf8.c \ $(SRCDIR)/util.c \ $(SRCDIR)/verify.c \ $(SRCDIR)/vfile.c \ |
︙ | ︙ | |||
208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 | $(OBJDIR)/content_.c \ $(OBJDIR)/db_.c \ $(OBJDIR)/delta_.c \ $(OBJDIR)/deltacmd_.c \ $(OBJDIR)/descendants_.c \ $(OBJDIR)/diff_.c \ $(OBJDIR)/diffcmd_.c \ $(OBJDIR)/doc_.c \ $(OBJDIR)/encode_.c \ $(OBJDIR)/event_.c \ $(OBJDIR)/export_.c \ $(OBJDIR)/file_.c \ $(OBJDIR)/finfo_.c \ $(OBJDIR)/foci_.c \ $(OBJDIR)/fusefs_.c \ $(OBJDIR)/glob_.c \ $(OBJDIR)/graph_.c \ $(OBJDIR)/gzip_.c \ $(OBJDIR)/http_.c \ $(OBJDIR)/http_socket_.c \ $(OBJDIR)/http_ssl_.c \ | > > | 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 | $(OBJDIR)/content_.c \ $(OBJDIR)/db_.c \ $(OBJDIR)/delta_.c \ $(OBJDIR)/deltacmd_.c \ $(OBJDIR)/descendants_.c \ $(OBJDIR)/diff_.c \ $(OBJDIR)/diffcmd_.c \ $(OBJDIR)/dispatch_.c \ $(OBJDIR)/doc_.c \ $(OBJDIR)/encode_.c \ $(OBJDIR)/event_.c \ $(OBJDIR)/export_.c \ $(OBJDIR)/file_.c \ $(OBJDIR)/finfo_.c \ $(OBJDIR)/foci_.c \ $(OBJDIR)/fshell_.c \ $(OBJDIR)/fusefs_.c \ $(OBJDIR)/glob_.c \ $(OBJDIR)/graph_.c \ $(OBJDIR)/gzip_.c \ $(OBJDIR)/http_.c \ $(OBJDIR)/http_socket_.c \ $(OBJDIR)/http_ssl_.c \ |
︙ | ︙ | |||
286 287 288 289 290 291 292 293 294 295 296 297 298 299 | $(OBJDIR)/tar_.c \ $(OBJDIR)/th_main_.c \ $(OBJDIR)/timeline_.c \ $(OBJDIR)/tkt_.c \ $(OBJDIR)/tktsetup_.c \ $(OBJDIR)/undo_.c \ $(OBJDIR)/unicode_.c \ $(OBJDIR)/update_.c \ $(OBJDIR)/url_.c \ $(OBJDIR)/user_.c \ $(OBJDIR)/utf8_.c \ $(OBJDIR)/util_.c \ $(OBJDIR)/verify_.c \ $(OBJDIR)/vfile_.c \ | > | 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 | $(OBJDIR)/tar_.c \ $(OBJDIR)/th_main_.c \ $(OBJDIR)/timeline_.c \ $(OBJDIR)/tkt_.c \ $(OBJDIR)/tktsetup_.c \ $(OBJDIR)/undo_.c \ $(OBJDIR)/unicode_.c \ $(OBJDIR)/unversioned_.c \ $(OBJDIR)/update_.c \ $(OBJDIR)/url_.c \ $(OBJDIR)/user_.c \ $(OBJDIR)/utf8_.c \ $(OBJDIR)/util_.c \ $(OBJDIR)/verify_.c \ $(OBJDIR)/vfile_.c \ |
︙ | ︙ | |||
329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 | $(OBJDIR)/content.o \ $(OBJDIR)/db.o \ $(OBJDIR)/delta.o \ $(OBJDIR)/deltacmd.o \ $(OBJDIR)/descendants.o \ $(OBJDIR)/diff.o \ $(OBJDIR)/diffcmd.o \ $(OBJDIR)/doc.o \ $(OBJDIR)/encode.o \ $(OBJDIR)/event.o \ $(OBJDIR)/export.o \ $(OBJDIR)/file.o \ $(OBJDIR)/finfo.o \ $(OBJDIR)/foci.o \ $(OBJDIR)/fusefs.o \ $(OBJDIR)/glob.o \ $(OBJDIR)/graph.o \ $(OBJDIR)/gzip.o \ $(OBJDIR)/http.o \ $(OBJDIR)/http_socket.o \ $(OBJDIR)/http_ssl.o \ | > > | 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 | $(OBJDIR)/content.o \ $(OBJDIR)/db.o \ $(OBJDIR)/delta.o \ $(OBJDIR)/deltacmd.o \ $(OBJDIR)/descendants.o \ $(OBJDIR)/diff.o \ $(OBJDIR)/diffcmd.o \ $(OBJDIR)/dispatch.o \ $(OBJDIR)/doc.o \ $(OBJDIR)/encode.o \ $(OBJDIR)/event.o \ $(OBJDIR)/export.o \ $(OBJDIR)/file.o \ $(OBJDIR)/finfo.o \ $(OBJDIR)/foci.o \ $(OBJDIR)/fshell.o \ $(OBJDIR)/fusefs.o \ $(OBJDIR)/glob.o \ $(OBJDIR)/graph.o \ $(OBJDIR)/gzip.o \ $(OBJDIR)/http.o \ $(OBJDIR)/http_socket.o \ $(OBJDIR)/http_ssl.o \ |
︙ | ︙ | |||
407 408 409 410 411 412 413 414 415 416 417 418 419 420 | $(OBJDIR)/tar.o \ $(OBJDIR)/th_main.o \ $(OBJDIR)/timeline.o \ $(OBJDIR)/tkt.o \ $(OBJDIR)/tktsetup.o \ $(OBJDIR)/undo.o \ $(OBJDIR)/unicode.o \ $(OBJDIR)/update.o \ $(OBJDIR)/url.o \ $(OBJDIR)/user.o \ $(OBJDIR)/utf8.o \ $(OBJDIR)/util.o \ $(OBJDIR)/verify.o \ $(OBJDIR)/vfile.o \ | > | 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 | $(OBJDIR)/tar.o \ $(OBJDIR)/th_main.o \ $(OBJDIR)/timeline.o \ $(OBJDIR)/tkt.o \ $(OBJDIR)/tktsetup.o \ $(OBJDIR)/undo.o \ $(OBJDIR)/unicode.o \ $(OBJDIR)/unversioned.o \ $(OBJDIR)/update.o \ $(OBJDIR)/url.o \ $(OBJDIR)/user.o \ $(OBJDIR)/utf8.o \ $(OBJDIR)/util.o \ $(OBJDIR)/verify.o \ $(OBJDIR)/vfile.o \ |
︙ | ︙ | |||
440 441 442 443 444 445 446 | codecheck: $(TRANS_SRC) $(OBJDIR)/codecheck1 $(OBJDIR)/codecheck1 $(TRANS_SRC) $(OBJDIR): -mkdir $(OBJDIR) $(OBJDIR)/translate: $(SRCDIR)/translate.c | | | | | | | | 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 | codecheck: $(TRANS_SRC) $(OBJDIR)/codecheck1 $(OBJDIR)/codecheck1 $(TRANS_SRC) $(OBJDIR): -mkdir $(OBJDIR) $(OBJDIR)/translate: $(SRCDIR)/translate.c $(XBCC) -o $(OBJDIR)/translate $(SRCDIR)/translate.c $(OBJDIR)/makeheaders: $(SRCDIR)/makeheaders.c $(XBCC) -o $(OBJDIR)/makeheaders $(SRCDIR)/makeheaders.c $(OBJDIR)/mkindex: $(SRCDIR)/mkindex.c $(XBCC) -o $(OBJDIR)/mkindex $(SRCDIR)/mkindex.c $(OBJDIR)/mkbuiltin: $(SRCDIR)/mkbuiltin.c $(XBCC) -o $(OBJDIR)/mkbuiltin $(SRCDIR)/mkbuiltin.c $(OBJDIR)/mkversion: $(SRCDIR)/mkversion.c $(XBCC) -o $(OBJDIR)/mkversion $(SRCDIR)/mkversion.c $(OBJDIR)/codecheck1: $(SRCDIR)/codecheck1.c $(XBCC) -o $(OBJDIR)/codecheck1 $(SRCDIR)/codecheck1.c # Run the test suite. # Other flags that can be included in TESTFLAGS are: # # -halt Stop testing after the first failed test # -keep Keep the temporary workspace for debugging # -prot Write a detailed log of the tests to the file ./prot |
︙ | ︙ | |||
478 479 480 481 482 483 484 | $(TCLSH) $(SRCDIR)/../test/tester.tcl $(APPNAME) -quiet $(TESTFLAGS) $(OBJDIR)/VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(SRCDIR)/../VERSION $(OBJDIR)/mkversion $(OBJDIR)/mkversion $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(SRCDIR)/../VERSION >$(OBJDIR)/VERSION.h # Setup the options used to compile the included SQLite library. SQLITE_OPTIONS = -DNDEBUG=1 \ | > > > > > > > > | > > < < | 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 | $(TCLSH) $(SRCDIR)/../test/tester.tcl $(APPNAME) -quiet $(TESTFLAGS) $(OBJDIR)/VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(SRCDIR)/../VERSION $(OBJDIR)/mkversion $(OBJDIR)/mkversion $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(SRCDIR)/../VERSION >$(OBJDIR)/VERSION.h # Setup the options used to compile the included SQLite library. SQLITE_OPTIONS = -DNDEBUG=1 \ -DSQLITE_THREADSAFE=0 \ -DSQLITE_DEFAULT_MEMSTATUS=0 \ -DSQLITE_DEFAULT_WAL_SYNCHRONOUS=1 \ -DSQLITE_LIKE_DOESNT_MATCH_BLOBS \ -DSQLITE_OMIT_DECLTYPE \ -DSQLITE_OMIT_DEPRECATED \ -DSQLITE_OMIT_PROGRESS_CALLBACK \ -DSQLITE_OMIT_SHARED_CACHE \ -DSQLITE_OMIT_LOAD_EXTENSION \ -DSQLITE_MAX_EXPR_DEPTH=0 \ -DSQLITE_USE_ALLOCA \ -DSQLITE_ENABLE_LOCKING_STYLE=0 \ -DSQLITE_DEFAULT_FILE_FORMAT=4 \ -DSQLITE_ENABLE_EXPLAIN_COMMENTS \ -DSQLITE_ENABLE_FTS4 \ -DSQLITE_ENABLE_FTS3_PARENTHESIS \ -DSQLITE_ENABLE_DBSTAT_VTAB \ -DSQLITE_ENABLE_JSON1 \ -DSQLITE_ENABLE_FTS5 |
︙ | ︙ | |||
535 536 537 538 539 540 541 542 543 544 545 546 547 548 | # 0, ordinary SQLite is used. If 1, then sqlite3-see.c (not part of # the source tree) is used and extra flags are provided to enable # the SQLite Encryption Extension. SQLITE3_SRC.0 = sqlite3.c SQLITE3_SRC.1 = sqlite3-see.c SQLITE3_SRC. = sqlite3.c SQLITE3_SRC = $(SRCDIR)/$(SQLITE3_SRC.$(USE_SEE)) SEE_FLAGS.0 = SEE_FLAGS.1 = -DSQLITE_HAS_CODEC SEE_FLAGS. = SEE_FLAGS = $(SEE_FLAGS.$(USE_SEE)) EXTRAOBJ = \ | > > > > | 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 | # 0, ordinary SQLite is used. If 1, then sqlite3-see.c (not part of # the source tree) is used and extra flags are provided to enable # the SQLite Encryption Extension. SQLITE3_SRC.0 = sqlite3.c SQLITE3_SRC.1 = sqlite3-see.c SQLITE3_SRC. = sqlite3.c SQLITE3_SRC = $(SRCDIR)/$(SQLITE3_SRC.$(USE_SEE)) SQLITE3_SHELL_SRC.0 = shell.c SQLITE3_SHELL_SRC.1 = shell-see.c SQLITE3_SHELL_SRC. = shell.c SQLITE3_SHELL_SRC = $(SRCDIR)/$(SQLITE3_SHELL_SRC.$(USE_SEE)) SEE_FLAGS.0 = SEE_FLAGS.1 = -DSQLITE_HAS_CODEC SEE_FLAGS. = SEE_FLAGS = $(SEE_FLAGS.$(USE_SEE)) EXTRAOBJ = \ |
︙ | ︙ | |||
599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 | $(OBJDIR)/content_.c:$(OBJDIR)/content.h \ $(OBJDIR)/db_.c:$(OBJDIR)/db.h \ $(OBJDIR)/delta_.c:$(OBJDIR)/delta.h \ $(OBJDIR)/deltacmd_.c:$(OBJDIR)/deltacmd.h \ $(OBJDIR)/descendants_.c:$(OBJDIR)/descendants.h \ $(OBJDIR)/diff_.c:$(OBJDIR)/diff.h \ $(OBJDIR)/diffcmd_.c:$(OBJDIR)/diffcmd.h \ $(OBJDIR)/doc_.c:$(OBJDIR)/doc.h \ $(OBJDIR)/encode_.c:$(OBJDIR)/encode.h \ $(OBJDIR)/event_.c:$(OBJDIR)/event.h \ $(OBJDIR)/export_.c:$(OBJDIR)/export.h \ $(OBJDIR)/file_.c:$(OBJDIR)/file.h \ $(OBJDIR)/finfo_.c:$(OBJDIR)/finfo.h \ $(OBJDIR)/foci_.c:$(OBJDIR)/foci.h \ $(OBJDIR)/fusefs_.c:$(OBJDIR)/fusefs.h \ $(OBJDIR)/glob_.c:$(OBJDIR)/glob.h \ $(OBJDIR)/graph_.c:$(OBJDIR)/graph.h \ $(OBJDIR)/gzip_.c:$(OBJDIR)/gzip.h \ $(OBJDIR)/http_.c:$(OBJDIR)/http.h \ $(OBJDIR)/http_socket_.c:$(OBJDIR)/http_socket.h \ $(OBJDIR)/http_ssl_.c:$(OBJDIR)/http_ssl.h \ | > > | 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 | $(OBJDIR)/content_.c:$(OBJDIR)/content.h \ $(OBJDIR)/db_.c:$(OBJDIR)/db.h \ $(OBJDIR)/delta_.c:$(OBJDIR)/delta.h \ $(OBJDIR)/deltacmd_.c:$(OBJDIR)/deltacmd.h \ $(OBJDIR)/descendants_.c:$(OBJDIR)/descendants.h \ $(OBJDIR)/diff_.c:$(OBJDIR)/diff.h \ $(OBJDIR)/diffcmd_.c:$(OBJDIR)/diffcmd.h \ $(OBJDIR)/dispatch_.c:$(OBJDIR)/dispatch.h \ $(OBJDIR)/doc_.c:$(OBJDIR)/doc.h \ $(OBJDIR)/encode_.c:$(OBJDIR)/encode.h \ $(OBJDIR)/event_.c:$(OBJDIR)/event.h \ $(OBJDIR)/export_.c:$(OBJDIR)/export.h \ $(OBJDIR)/file_.c:$(OBJDIR)/file.h \ $(OBJDIR)/finfo_.c:$(OBJDIR)/finfo.h \ $(OBJDIR)/foci_.c:$(OBJDIR)/foci.h \ $(OBJDIR)/fshell_.c:$(OBJDIR)/fshell.h \ $(OBJDIR)/fusefs_.c:$(OBJDIR)/fusefs.h \ $(OBJDIR)/glob_.c:$(OBJDIR)/glob.h \ $(OBJDIR)/graph_.c:$(OBJDIR)/graph.h \ $(OBJDIR)/gzip_.c:$(OBJDIR)/gzip.h \ $(OBJDIR)/http_.c:$(OBJDIR)/http.h \ $(OBJDIR)/http_socket_.c:$(OBJDIR)/http_socket.h \ $(OBJDIR)/http_ssl_.c:$(OBJDIR)/http_ssl.h \ |
︙ | ︙ | |||
677 678 679 680 681 682 683 684 685 686 687 688 689 690 | $(OBJDIR)/tar_.c:$(OBJDIR)/tar.h \ $(OBJDIR)/th_main_.c:$(OBJDIR)/th_main.h \ $(OBJDIR)/timeline_.c:$(OBJDIR)/timeline.h \ $(OBJDIR)/tkt_.c:$(OBJDIR)/tkt.h \ $(OBJDIR)/tktsetup_.c:$(OBJDIR)/tktsetup.h \ $(OBJDIR)/undo_.c:$(OBJDIR)/undo.h \ $(OBJDIR)/unicode_.c:$(OBJDIR)/unicode.h \ $(OBJDIR)/update_.c:$(OBJDIR)/update.h \ $(OBJDIR)/url_.c:$(OBJDIR)/url.h \ $(OBJDIR)/user_.c:$(OBJDIR)/user.h \ $(OBJDIR)/utf8_.c:$(OBJDIR)/utf8.h \ $(OBJDIR)/util_.c:$(OBJDIR)/util.h \ $(OBJDIR)/verify_.c:$(OBJDIR)/verify.h \ $(OBJDIR)/vfile_.c:$(OBJDIR)/vfile.h \ | > | 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 | $(OBJDIR)/tar_.c:$(OBJDIR)/tar.h \ $(OBJDIR)/th_main_.c:$(OBJDIR)/th_main.h \ $(OBJDIR)/timeline_.c:$(OBJDIR)/timeline.h \ $(OBJDIR)/tkt_.c:$(OBJDIR)/tkt.h \ $(OBJDIR)/tktsetup_.c:$(OBJDIR)/tktsetup.h \ $(OBJDIR)/undo_.c:$(OBJDIR)/undo.h \ $(OBJDIR)/unicode_.c:$(OBJDIR)/unicode.h \ $(OBJDIR)/unversioned_.c:$(OBJDIR)/unversioned.h \ $(OBJDIR)/update_.c:$(OBJDIR)/update.h \ $(OBJDIR)/url_.c:$(OBJDIR)/url.h \ $(OBJDIR)/user_.c:$(OBJDIR)/user.h \ $(OBJDIR)/utf8_.c:$(OBJDIR)/utf8.h \ $(OBJDIR)/util_.c:$(OBJDIR)/util.h \ $(OBJDIR)/verify_.c:$(OBJDIR)/verify.h \ $(OBJDIR)/vfile_.c:$(OBJDIR)/vfile.h \ |
︙ | ︙ | |||
906 907 908 909 910 911 912 913 914 915 916 917 918 919 | $(OBJDIR)/diffcmd_.c: $(SRCDIR)/diffcmd.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/diffcmd.c >$@ $(OBJDIR)/diffcmd.o: $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/diffcmd.o -c $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h: $(OBJDIR)/headers $(OBJDIR)/doc_.c: $(SRCDIR)/doc.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/doc.c >$@ $(OBJDIR)/doc.o: $(OBJDIR)/doc_.c $(OBJDIR)/doc.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/doc.o -c $(OBJDIR)/doc_.c | > > > > > > > > | 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 | $(OBJDIR)/diffcmd_.c: $(SRCDIR)/diffcmd.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/diffcmd.c >$@ $(OBJDIR)/diffcmd.o: $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/diffcmd.o -c $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h: $(OBJDIR)/headers $(OBJDIR)/dispatch_.c: $(SRCDIR)/dispatch.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/dispatch.c >$@ $(OBJDIR)/dispatch.o: $(OBJDIR)/dispatch_.c $(OBJDIR)/dispatch.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/dispatch.o -c $(OBJDIR)/dispatch_.c $(OBJDIR)/dispatch.h: $(OBJDIR)/headers $(OBJDIR)/doc_.c: $(SRCDIR)/doc.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/doc.c >$@ $(OBJDIR)/doc.o: $(OBJDIR)/doc_.c $(OBJDIR)/doc.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/doc.o -c $(OBJDIR)/doc_.c |
︙ | ︙ | |||
962 963 964 965 966 967 968 969 970 971 972 973 974 975 | $(OBJDIR)/foci_.c: $(SRCDIR)/foci.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/foci.c >$@ $(OBJDIR)/foci.o: $(OBJDIR)/foci_.c $(OBJDIR)/foci.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/foci.o -c $(OBJDIR)/foci_.c $(OBJDIR)/foci.h: $(OBJDIR)/headers $(OBJDIR)/fusefs_.c: $(SRCDIR)/fusefs.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/fusefs.c >$@ $(OBJDIR)/fusefs.o: $(OBJDIR)/fusefs_.c $(OBJDIR)/fusefs.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/fusefs.o -c $(OBJDIR)/fusefs_.c | > > > > > > > > | 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 | $(OBJDIR)/foci_.c: $(SRCDIR)/foci.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/foci.c >$@ $(OBJDIR)/foci.o: $(OBJDIR)/foci_.c $(OBJDIR)/foci.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/foci.o -c $(OBJDIR)/foci_.c $(OBJDIR)/foci.h: $(OBJDIR)/headers $(OBJDIR)/fshell_.c: $(SRCDIR)/fshell.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/fshell.c >$@ $(OBJDIR)/fshell.o: $(OBJDIR)/fshell_.c $(OBJDIR)/fshell.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/fshell.o -c $(OBJDIR)/fshell_.c $(OBJDIR)/fshell.h: $(OBJDIR)/headers $(OBJDIR)/fusefs_.c: $(SRCDIR)/fusefs.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/fusefs.c >$@ $(OBJDIR)/fusefs.o: $(OBJDIR)/fusefs_.c $(OBJDIR)/fusefs.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/fusefs.o -c $(OBJDIR)/fusefs_.c |
︙ | ︙ | |||
1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 | $(OBJDIR)/unicode_.c: $(SRCDIR)/unicode.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/unicode.c >$@ $(OBJDIR)/unicode.o: $(OBJDIR)/unicode_.c $(OBJDIR)/unicode.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/unicode.o -c $(OBJDIR)/unicode_.c $(OBJDIR)/unicode.h: $(OBJDIR)/headers $(OBJDIR)/update_.c: $(SRCDIR)/update.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/update.c >$@ $(OBJDIR)/update.o: $(OBJDIR)/update_.c $(OBJDIR)/update.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/update.o -c $(OBJDIR)/update_.c | > > > > > > > > | 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 | $(OBJDIR)/unicode_.c: $(SRCDIR)/unicode.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/unicode.c >$@ $(OBJDIR)/unicode.o: $(OBJDIR)/unicode_.c $(OBJDIR)/unicode.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/unicode.o -c $(OBJDIR)/unicode_.c $(OBJDIR)/unicode.h: $(OBJDIR)/headers $(OBJDIR)/unversioned_.c: $(SRCDIR)/unversioned.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/unversioned.c >$@ $(OBJDIR)/unversioned.o: $(OBJDIR)/unversioned_.c $(OBJDIR)/unversioned.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/unversioned.o -c $(OBJDIR)/unversioned_.c $(OBJDIR)/unversioned.h: $(OBJDIR)/headers $(OBJDIR)/update_.c: $(SRCDIR)/update.c $(OBJDIR)/translate $(OBJDIR)/translate $(SRCDIR)/update.c >$@ $(OBJDIR)/update.o: $(OBJDIR)/update_.c $(OBJDIR)/update.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/update.o -c $(OBJDIR)/update_.c |
︙ | ︙ | |||
1654 1655 1656 1657 1658 1659 1660 | $(XTCC) -o $(OBJDIR)/zip.o -c $(OBJDIR)/zip_.c $(OBJDIR)/zip.h: $(OBJDIR)/headers $(OBJDIR)/sqlite3.o: $(SQLITE3_SRC) $(XTCC) $(SQLITE_OPTIONS) $(SQLITE_CFLAGS) $(SEE_FLAGS) \ -c $(SQLITE3_SRC) -o $@ | | | | 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 | $(XTCC) -o $(OBJDIR)/zip.o -c $(OBJDIR)/zip_.c $(OBJDIR)/zip.h: $(OBJDIR)/headers $(OBJDIR)/sqlite3.o: $(SQLITE3_SRC) $(XTCC) $(SQLITE_OPTIONS) $(SQLITE_CFLAGS) $(SEE_FLAGS) \ -c $(SQLITE3_SRC) -o $@ $(OBJDIR)/shell.o: $(SQLITE3_SHELL_SRC) $(SRCDIR)/sqlite3.h $(XTCC) $(SHELL_OPTIONS) $(SHELL_CFLAGS) $(LINENOISE_DEF.$(USE_LINENOISE)) -c $(SQLITE3_SHELL_SRC) -o $@ $(OBJDIR)/linenoise.o: $(SRCDIR)/linenoise.c $(SRCDIR)/linenoise.h $(XTCC) -c $(SRCDIR)/linenoise.c -o $@ $(OBJDIR)/th.o: $(SRCDIR)/th.c $(XTCC) -c $(SRCDIR)/th.c -o $@ |
︙ | ︙ |
Changes to src/makeheaders.c.
︙ | ︙ | |||
1104 1105 1106 1107 1108 1109 1110 | ** If pTable is not NULL, then insert every identifier seen into the ** IdentTable. This includes any identifiers seen inside of {...}. ** ** The number of errors encountered is returned. An error is an ** unterminated token. */ static int GetBigToken(InStream *pIn, Token *pToken, IdentTable *pTable){ | | | 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 | ** If pTable is not NULL, then insert every identifier seen into the ** IdentTable. This includes any identifiers seen inside of {...}. ** ** The number of errors encountered is returned. An error is an ** unterminated token. */ static int GetBigToken(InStream *pIn, Token *pToken, IdentTable *pTable){ const char *zStart; int iStart; int nBrace; int c; int nLine; int nErr; nErr = GetNonspaceToken(pIn,pToken); |
︙ | ︙ | |||
1133 1134 1135 1136 1137 1138 1139 | if( pToken->zText[0]=='{' ) break; return nErr; default: return nErr; } | < | 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 | if( pToken->zText[0]=='{' ) break; return nErr; default: return nErr; } iStart = pIn->i; zStart = pToken->zText; nLine = pToken->nLine; nBrace = 1; while( nBrace ){ nErr += GetNonspaceToken(pIn,pToken); /* printf("%04d: nBrace=%d [%.*s]\n",pToken->nLine,nBrace, |
︙ | ︙ | |||
1679 1680 1681 1682 1683 1684 1685 | /* ** This routine is called when we see a method for a class that begins ** with the PUBLIC, PRIVATE, or PROTECTED keywords. Such methods are ** added to their class definitions. */ static int ProcessMethodDef(Token *pFirst, Token *pLast, int flags){ | < < | 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 1696 1697 | /* ** This routine is called when we see a method for a class that begins ** with the PUBLIC, PRIVATE, or PROTECTED keywords. Such methods are ** added to their class definitions. */ static int ProcessMethodDef(Token *pFirst, Token *pLast, int flags){ Token *pClass; char *zDecl; Decl *pDecl; String str; int type; pLast = pLast->pPrev; while( pFirst->zText[0]=='P' ){ int rc = 1; switch( pFirst->nText ){ case 6: rc = strncmp(pFirst->zText,"PUBLIC",6); break; case 7: rc = strncmp(pFirst->zText,"PRIVATE",7); break; case 9: rc = strncmp(pFirst->zText,"PROTECTED",9); break; |
︙ | ︙ | |||
1968 1969 1970 1971 1972 1973 1974 | && (flags & PS_Extern)==0 ){ fprintf(stderr,"%s:%d: Can't define a variable in this context\n", zFilename, pFirst->nLine); nErr++; } pName = FindDeclName(pFirst,pEnd->pPrev); if( pName==0 ){ | > > > > | | | > | 1965 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 | && (flags & PS_Extern)==0 ){ fprintf(stderr,"%s:%d: Can't define a variable in this context\n", zFilename, pFirst->nLine); nErr++; } pName = FindDeclName(pFirst,pEnd->pPrev); if( pName==0 ){ if( pFirst->nText==4 && strncmp(pFirst->zText,"enum",4)==0 ){ /* Ignore completely anonymous enums. See documentation section 3.8.1. */ return nErr; }else{ fprintf(stderr,"%s:%d: Can't find a name for the object declared here.\n", zFilename, pFirst->nLine); return nErr+1; } } #ifdef DEBUG if( debugMask & PARSER ){ if( flags & PS_Typedef ){ printf("**** Found typedef %.*s at line %d...\n", pName->nText, pName->zText, pName->nLine); |
︙ | ︙ |
Changes to src/makeheaders.html.
1 2 3 4 5 6 7 8 | <html> <head><title>The Makeheaders Program</title></head> <body bgcolor=white> <h1 align=center>The Makeheaders Program</h1> <p> This document describes <em>makeheaders</em>, | | | | | | | | | | | | | | | | | > > | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 | <html> <head><title>The Makeheaders Program</title></head> <body bgcolor=white> <h1 align=center>The Makeheaders Program</h1> <p> This document describes <em>makeheaders</em>, a tool that automatically generates “<code>.h</code>” files for a C or C++ programming project. </p> <h2>Table Of Contents</h2> <ul> <li><a href="#H0002">1,0 Background</a> <ul> <li><a href="#H0003">1.1 Problems With The Traditional Approach</a> <li><a href="#H0004">1.2 The Makeheaders Solution</a> </ul> <li><a href="#H0005">2.0 Running The Makeheaders Program</a> <li><a href="#H0006">3.0 Preparing Source Files For Use With Makeheaders</a> <ul> <li><a href="#H0007">3.1 The Basic Setup</a> <li><a href="#H0008">3.2 What Declarations Get Copied</a> <li><a href="#H0009">3.3 How To Avoid Having To Write Any Header Files</a> <li><a href="#H0010">3.4 Designating Declarations For Export</a> <li><a href="#H0011">3.5 Local declarations processed by makeheaders</a> <li><a href="#H0012">3.6 Using Makeheaders With C++ Code</a> <li><a href="#H0013">3.7 Conditional Compilation</a> <li><a href="#H0014">3.8 Caveats</a> </ul> <li><a href="#H0015">4.0 Using Makeheaders To Generate Documentation</a> <li><a href="#H0016">5.0 Compiling The Makeheaders Program</a> <li><a href="#H0017">6.0 History</a> <li><a href="#H0018">7.0 Summary And Conclusion</a> </ul><a name="H0002"></a> <h2>1.0 Background</h2> <p> A piece of C source code can be one of two things: a <em>declaration</em> or a <em>definition</em>. A declaration is source text that gives information to the |
︙ | ︙ | |||
65 66 67 68 69 70 71 | <p> Declarations in C include things such as the following: <ul> <li> Typedefs. <li> Structure, union and enumeration declarations. <li> Function and procedure prototypes. <li> Preprocessor macros and #defines. | | | 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 | <p> Declarations in C include things such as the following: <ul> <li> Typedefs. <li> Structure, union and enumeration declarations. <li> Function and procedure prototypes. <li> Preprocessor macros and #defines. <li> “<code>extern</code>” variable declarations. </ul> </p> <p> Definitions in C, on the other hand, include these kinds of things: <ul> <li> Variable definitions. |
︙ | ︙ | |||
87 88 89 90 91 92 93 | modern software engineering. Another way of looking at the difference is that the declaration is the <em>interface</em> and the definition is the <em>implementation</em>. </p> <p> In C programs, it has always been the tradition that declarations are | | | | | 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 | modern software engineering. Another way of looking at the difference is that the declaration is the <em>interface</em> and the definition is the <em>implementation</em>. </p> <p> In C programs, it has always been the tradition that declarations are put in files with the “<code>.h</code>” suffix and definitions are placed in “<code>.c</code>” files. The .c files contain “<code>#include</code>” preprocessor statements that cause the contents of .h files to be included as part of the source code when the .c file is compiled. In this way, the .h files define the interface to a subsystem and the .c files define how the subsystem is implemented. </p> <a name="H0003"></a> |
︙ | ︙ | |||
141 142 143 144 145 146 147 | files change, the entire program must be recompiled. It also happens that those important .h files tend to be the ones that change most frequently. This means that the entire program must be recompiled frequently, leading to a lengthy modify-compile-test cycle and a corresponding decrease in programmer productivity. <p><li> | | | | | 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 | files change, the entire program must be recompiled. It also happens that those important .h files tend to be the ones that change most frequently. This means that the entire program must be recompiled frequently, leading to a lengthy modify-compile-test cycle and a corresponding decrease in programmer productivity. <p><li> The C programming language requires that declarations depending upon each other must occur in a particular order. In a program with complex, interwoven data structures, the correct declaration order can become very difficult to determine manually, especially when the declarations involved are spread out over several files. </ol> </p> <a name="H0004"></a> <h3>1.2 The Makeheaders Solution</h3> <p> The makeheaders program is designed to ameliorate the problems associated with the traditional C programming model by automatically generating the interface information in the .h files from interface information contained in other .h files and from implementation information in the .c files. When the makeheaders program is run, it scans the source files for a project, then generates a series of new .h files, one for each .c file. The generated .h files contain exactly those declarations required by the corresponding .c files, no more and no less. |
︙ | ︙ | |||
193 194 195 196 197 198 199 | a problem. Simply rerun makeheaders to resynchronize everything. <p><li> The generated .h file contains the minimal set of declarations needed by the .c file. This means that when something changes, a minimal amount of recompilation is required to produce an updated executable. | | | 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 | a problem. Simply rerun makeheaders to resynchronize everything. <p><li> The generated .h file contains the minimal set of declarations needed by the .c file. This means that when something changes, a minimal amount of recompilation is required to produce an updated executable. Experience has shown that this gives a dramatic improvement in programmer productivity by facilitating a rapid modify-compile-test cycle during development. <p><li> The makeheaders program automatically sorts declarations into the correct order, completely eliminating the wearisome and error-prone task of sorting declarations by hand. </ol> |
︙ | ︙ | |||
235 236 237 238 239 240 241 | but manually entered .h files that contain structure declarations and so forth will be scanned and the declarations will be copied into the generated .h files as appropriate. But if makeheaders sees that the .h file that it has generated is no different from the .h file it generated last time, it doesn't update the file. | | | 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 | but manually entered .h files that contain structure declarations and so forth will be scanned and the declarations will be copied into the generated .h files as appropriate. But if makeheaders sees that the .h file that it has generated is no different from the .h file it generated last time, it doesn't update the file. This prevents the corresponding .c files from having to be needlessly recompiled. </p> <p> There are several options to the makeheaders program that can be used to alter its behavior. The default behavior is to write a single .h file for each .c file and |
︙ | ︙ | |||
260 261 262 263 264 265 266 | into the file of your choice. </p> <p> A similar option is -H. Like the lower-case -h option, big -H generates a single include file on standard output. But unlike small -h, the big -H only emits prototypes and declarations that | | | 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 | into the file of your choice. </p> <p> A similar option is -H. Like the lower-case -h option, big -H generates a single include file on standard output. But unlike small -h, the big -H only emits prototypes and declarations that have been designated as “exportable”. The idea is that -H will generate an include file that defines the interface to a library. More will be said about this in section 3.4. </p> <p> Sometimes you want the base name of the .c file and the .h file to |
︙ | ︙ | |||
291 292 293 294 295 296 297 | If you want a particular file to be scanned by makeheaders but you don't want makeheaders to generate a header file for that file, then you can supply an empty header filename, like this: <pre> makeheaders alpha.c beta.c gamma.c: </pre> In this example, makeheaders will scan the three files named | | | | | | | | | | | | > | > | | | | | | | | > | | | | | | | | | 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 | If you want a particular file to be scanned by makeheaders but you don't want makeheaders to generate a header file for that file, then you can supply an empty header filename, like this: <pre> makeheaders alpha.c beta.c gamma.c: </pre> In this example, makeheaders will scan the three files named “<code>alpha.c</code>”, “<code>beta.c</code>” and “<code>gamma.c</code>” but because of the colon on the end of third filename it will only generate headers for the first two files. Unfortunately, it is not possible to get makeheaders to process any file whose name contains a colon. </p> <p> In a large project, the length of the command line for makeheaders can become very long. If the operating system doesn't support long command lines (example: DOS and Win32) you may not be able to list all of the input files in the space available. In that case, you can use the “<code>-f</code>” option followed by the name of a file to cause makeheaders to read command line options and filename from the file instead of from the command line. For example, you might prepare a file named “<code>mkhdr.dat</code>” that contains text like this: <pre> src/alpha.c:hdr/alpha.h src/beta.c:hdr/beta.h src/gamma.c:hdr/gamma.h ... </pre> Then invoke makeheaders as follows: <pre> makeheaders -f mkhdr.dat </pre> </p> <p> The “<code>-local</code>” option causes makeheaders to generate of prototypes for “<code>static</code>” functions and procedures. Such prototypes are normally omitted. </p> <p> Finally, makeheaders also includes a “<code>-doc</code>” option. This command line option prevents makeheaders from generating any headers at all. Instead, makeheaders will write to standard output information about every definition and declaration that it encounters in its scan of source files. The information output includes the type of the definition or declaration and any comment that preceeds the definition or declaration. The output is in a format that can be easily parsed, and is intended to be read by another program that will generate documentation about the program. We'll talk more about this feature later. </p> <p> If you forget what command line options are available, or forget their exact name, you can invoke makeheaders using an unknown command line option (like “<code>--help</code>” or “<code>-?</code>”) and it will print a summary of the available options on standard error. 
If you need to process a file whose name begins with “<code>-</code>”, you can prepend a “<code>./</code>” to its name in order to get it accepted by the command line parser. Or, you can insert the special option “<code>--</code>” on the command line to cause all subsequent command line arguments to be treated as filenames even if their names begin with “<code>-</code>”. </p> <a name="H0006"></a> <h2>3.0 Preparing Source Files For Use With Makeheaders</h2> <p> Very little has to be done to prepare source files for use with makeheaders since makeheaders will read and understand ordinary C code. But it is important that you structure your files in a way that makes sense in the makeheaders context. This section will describe several typical uses of makeheaders. </p> <a name="H0007"></a> <h3>3.1 The Basic Setup</h3> <p> The simplest way to use makeheaders is to put all definitions in one or more .c files and all structure and type declarations in separate .h files. The only restriction is that you should take care to chose basenames for your .h files that are different from the basenames for your .c files. Recall that if your .c file is named (for example) “<code>alpha.c</code>” makeheaders will attempt to generate a corresponding header file named “<code>alpha.h</code>”. For that reason, you don't want to use that name for any of the .h files you write since that will prevent makeheaders from generating the .h file automatically. </p> <p> The structure of a .c file intented for use with makeheaders is very simple. All you have to do is add a single “<code>#include</code>” to the top of the file that sources the header file that makeheaders will generate. Hence, the beginning of a source file named “<code>alpha.c</code>” might look something like this: </p> <pre> /* * Introductory comment... */ #include "alpha.h" /* The rest of your code... */ </pre> <p> Your manually generated header files require no special attention at all. Code them as you normally would. However, makeheaders will work better if you omit the “<code>#if</code>” statements people often put around the outside of header files that prevent the files from being included more than once. For example, to create a header file named “<code>beta.h</code>”, many people will habitually write the following: <pre> #ifndef BETA_H #define BETA_H /* declarations for beta.h go here */ #endif </pre> You can forego this cleverness with makeheaders. Remember that the header files you write will never really be included by any C code. Instead, makeheaders will scan your header files to extract only those declarations that are needed by individual .c files and then copy those declarations to the .h files corresponding to the .c files. Hence, the “<code>#if</code>” wrapper serves no useful purpose. But it does make makeheaders work harder, forcing it to put the statements <pre> #if !defined(BETA_H) #endif </pre> |
︙ | ︙ | |||
457 458 459 460 461 462 463 | <pre> makeheaders *.[ch] </pre> The makeheaders program will scan all of the .c files and all of the manually written .h files and then automatically generate .h files | | | | | | | | | | | | | | | > | > | | | | | | | 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 | <pre> makeheaders *.[ch] </pre> The makeheaders program will scan all of the .c files and all of the manually written .h files and then automatically generate .h files corresponding to all .c files. </p> <p> Note that the wildcard expression used in the above example, “<code>*.[ch]</code>”, will expand to include all .h files in the current directory, both those entered manually be the programmer and others generated automatically by a prior run of makeheaders. But that is not a problem. The makeheaders program will recognize and ignore any files it has previously generated that show up on its input list. </p> <a name="H0008"></a> <h3>3.2 What Declarations Get Copied</h3> <p> The following list details all of the code constructs that makeheaders will extract and place in the automatically generated .h files: </p> <ul> <p><li> When a function is defined in any .c file, a prototype of that function is placed in the generated .h file of every .c file that calls the function.</p> <P>If the “<code>static</code>” keyword of C appears at the beginning of the function definition, the prototype is suppressed. If you use the “<code>LOCAL</code>” keyword where you would normally say “<code>static</code>”, then a prototype is generated, but it will only appear in the single header file that corresponds to the source file containing the function. For example, if the file <code>alpha.c</code> contains the following: <pre> LOCAL int testFunc(void){ return 0; } </pre> Then the header file <code>alpha.h</code> will contain <pre> #define LOCAL static LOCAL int testFunc(void); </pre> However, no other generated header files will contain a prototype for <code>testFunc()</code> since the function has only file scope.</p> <p>When the “<code>LOCAL</code>” keyword is used, makeheaders will also generate a #define for LOCAL, like this: <pre> #define LOCAL static </pre> so that the C compiler will know what it means.</p> <p>If you invoke makeheaders with a “<code>-local</code>” command-line option, then it treats the “<code>static</code>” keyword like “<code>LOCAL</code>” and generates prototypes in the header file that corresponds to the source file containing the function definition.</p> <p><li> When a global variable is defined in a .c file, an “<code>extern</code>” declaration of that variable is placed in the header of every .c file that uses the variable. </p> <p><li> When a structure, union or enumeration declaration or a function prototype or a C++ class declaration appears in a manually produced .h file, that declaration is copied into the automatically generated .h files of all .c files that use the structure, union, enumeration, function or class. 
But declarations that appear in a .c file are considered private to that .c file and are not copied into any automatically generated files. </p> <p><li> All #defines and typedefs that appear in manually produced .h files are copied into automatically generated .h files as needed. Similar constructs that appear in .c files are considered private to those files and are not copied. </p> <p><li> When a structure, union or enumeration declaration appears in a .h file, makeheaders will automatically generate a typedef that allows the declaration to be referenced without the “<code>struct</code>”, “<code>union</code>” or “<code>enum</code>” qualifier. In other words, if makeheaders sees the code: <pre> struct Examp { /* ... */ }; </pre> it will automatically generate a corresponding typedef like this: <pre> typedef struct Examp Examp; </pre> </p> <p><li> Makeheaders generates an error message if it encounters a function or variable definition within a .h file. The .h files are suppose to contain only interface, not implementation. C compilers will not enforce this convention, but makeheaders does. </ul> <p> As a final note, we observe that automatically generated declarations are ordered as required by the ANSI-C programming language. If the declaration of some structure “<code>X</code>” requires a prior declaration of another structure “<code>Y</code>”, then Y will appear first in the generated headers. </p> <a name="H0009"></a> <h3>3.3 How To Avoid Having To Write Any Header Files</h3> <p> In my experience, large projects work better if all of the manually |
︙ | ︙ | |||
608 609 610 611 612 613 614 | <p> You can instruct makeheaders to treat any part of a .c file as if it were a .h file by enclosing that part of the .c file within: <pre> #if INTERFACE #endif </pre> | | | | | | | | 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 | <p> You can instruct makeheaders to treat any part of a .c file as if it were a .h file by enclosing that part of the .c file within: <pre> #if INTERFACE #endif </pre> Thus any structure definitions that appear after the “<code>#if INTERFACE</code>” but before the corresponding “<code>#endif</code>” are eligable to be copied into the automatically generated .h files of other .c files. </p> <p> If you use the “<code>#if INTERFACE</code>” mechanism in a .c file, then the generated header for that .c file will contain a line like this: <pre> #define INTERFACE 0 </pre> In other words, the C compiler will never see any of the text that defines the interface. But makeheaders will copy all necessary definitions and declarations into the .h file it generates, so .c files will compile as if the declarations were really there. This approach has the advantage that you don't have to worry with putting the declarations in the correct ANSI-C order -- makeheaders will do that for you automatically. </p> <p> Note that you don't have to use this approach exclusively. You can put some declarations in .h files and others within the “<code>#if INTERFACE</code>” regions of .c files. Makeheaders treats all declarations alike, no matter where they come from. You should also note that a single .c file can contain as many “<code>#if INTERFACE</code>” regions as desired. </p> <a name="H0010"></a> <h3>3.4 Designating Declarations For Export</h3> <p> In a large project, one will often construct a hierarchy of |
︙ | ︙ | |||
662 663 664 665 666 667 668 | (The second interface is normally a subset of the first.) Ordinary C does not provide support for a tiered interface like this, but makeheaders does. </p> <p> Using makeheaders, it is possible to designate routines and data | | | 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 | (The second interface is normally a subset of the first.) Ordinary C does not provide support for a tiered interface like this, but makeheaders does. </p> <p> Using makeheaders, it is possible to designate routines and data structures as being for “<code>export</code>”. Exported objects are visible not only to other files within the same library or subassembly but also to other libraries and subassemblies in the larger program. By default, makeheaders only makes objects visible to other members of the same library. </p> |
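<p>
To make the tiered arrangement concrete, here is a minimal sketch (the
directory and file names are invented for illustration) of running
makeheaders once per library, using the -H option to produce the one header
that the rest of the program includes:
<pre>
   cd libfoo
   makeheaders foo1.c foo2.c
   makeheaders -H foo1.c foo2.c >../foo.h

   cd ../libbar
   makeheaders bar1.c bar2.c
   makeheaders -H bar1.c bar2.c >../bar.h
</pre>
Each .c file still gets its own generated header for use inside its library,
while foo.h and bar.h expose only the exported declarations to the other
libraries and subassemblies.
</p>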
︙ | ︙ | |||
688 689 690 691 692 693 694 | This is not a perfect solution, but it works well in practice. </p> <p> But trouble quickly arises when we attempt to devise a mechanism for telling makeheaders which prototypes it should export and which it should keep local. | | | 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 | This is not a perfect solution, but it works well in practice. </p> <p> But trouble quickly arises when we attempt to devise a mechanism for telling makeheaders which prototypes it should export and which it should keep local. The built-in “<code>static</code>” keyword of C works well for prohibiting prototypes from leaving a single source file, but because C doesn't support a linkage hierarchy, there is nothing in the C language to help us. We'll have to invent our own keyword: “<code>EXPORT</code>” </p> <p> Makeheaders allows the EXPORT keyword to precede any function or procedure definition. The routine following the EXPORT keyword is then eligible to appear in the header file generated using the -H command line option.
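<p>
A minimal sketch (the routine name is invented for illustration):
<pre>
   EXPORT int foo_open(const char *zName){
     /* body of the routine... */
     return 0;
   }
</pre>
A prototype for foo_open() can then appear in the header written by
“<code>makeheaders -H</code>”, while routines defined without the EXPORT
keyword stay out of that exported header.
</p>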
︙ | ︙ | |||
724 725 726 727 728 729 730 | are visible to all files within the library, any declarations or definitions within <pre> #if EXPORT_INTERFACE #endif </pre> will become part of the exported interface. | | | | | | | | | > | | > | 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 | are visible to all files within the library, any declarations or definitions within <pre> #if EXPORT_INTERFACE #endif </pre> will become part of the exported interface. The “<code>#if EXPORT_INTERFACE</code>” mechanism can be used in either .c or .h files. (The “<code>#if INTERFACE</code>” can also be used in both .h and .c files, but since it's use in a .h file would be redundant, we haven't mentioned it before.) </p> <a name="H0011"></a> <h3>3.5 Local declarations processed by makeheaders</h3> <p> Structure declarations and typedefs that appear in .c files are normally ignored by makeheaders. Such declarations are only intended for use by the source file in which they appear and so makeheaders doesn't need to copy them into any generated header files. We call such declarations “<code>private</code>”. </p> <p> Sometimes it is convenient to have makeheaders sort a sequence of private declarations into the correct order for us automatically. Or, we could have static functions and procedures for which we would like makeheaders to generate prototypes, but the arguments to these functions and procedures uses private declarations. In both of these cases, we want makeheaders to be aware of the private declarations and copy them into the local header file, but we don't want makeheaders to propagate the declarations outside of the file in which they are declared. </p> <p> When this situation arises, enclose the private declarations within <pre> #if LOCAL_INTERFACE #endif </pre> A “<code>LOCAL_INTERFACE</code>” block works very much like the “<code>INTERFACE</code>” and “<code>EXPORT_INTERFACE</code>” blocks described above, except that makeheaders insures that the objects declared in a LOCAL_INTERFACE are only visible to the file containing the LOCAL_INTERFACE. </p> <a name="H0012"></a> <h3>3.6 Using Makeheaders With C++ Code</h3> <p> You can use makeheaders to generate header files for C++ code, in addition to C. Makeheaders will recognize and copy both “<code>class</code>” declarations and inline function definitions, and it knows not to try to generate prototypes for methods. </p> <p> In fact, makeheaders is smart enough to be used in projects that employ a mixture of C and C++. |
︙ | ︙ | |||
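<p>
Circling back to the LOCAL_INTERFACE block described above, a short sketch
(the structure and routine are invented for illustration) of a private type
used by a file-scope routine might look like this:
<pre>
   #if LOCAL_INTERFACE
   struct Parser {
     int nToken;         /* Number of tokens seen so far */
     const char *zText;  /* Text being parsed */
   };
   #endif

   LOCAL int parser_step(struct Parser *p){
     return p->nToken++;
   }
</pre>
The structure declaration and the prototype for parser_step() are copied
only into the header belonging to this one source file; neither becomes
visible to any other file.
</p>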
803 804 805 806 807 808 809 | <p> No special command-line options are required to use makeheaders with C++ input. Makeheaders will recognize that its source code is C++ by the suffix on the source code filename. Simple ".c" or ".h" suffixes are assumed to be ANSI-C. Anything else, including ".cc", ".C" and ".cpp" is assumed to be C++. The name of the header file generated by makeheaders is derived from | | | | | | | 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 | <p> No special command-line options are required to use makeheaders with C++ input. Makeheaders will recognize that its source code is C++ by the suffix on the source code filename. Simple ".c" or ".h" suffixes are assumed to be ANSI-C. Anything else, including ".cc", ".C" and ".cpp" is assumed to be C++. The name of the header file generated by makeheaders is derived from the name of the source file by converting every "c" to "h" and every "C" to "H" in the suffix of the filename. Thus the C++ source file “<code>alpha.cpp</code>” will induce makeheaders to generate a header file named “<code>alpha.hpp</code>”. </p> <p> Makeheaders augments class definitions by inserting prototypes to methods where appropriate. If a method definition begins with one of the special keywords <b>PUBLIC</b>, <b>PROTECTED</b>, or <b>PRIVATE</b> (in upper-case to distinguish them from the regular C++ keywords with the same meaning) then a prototype for that method will be inserted into the class definition. If none of these keywords appear, then the prototype is not inserted. For example, in the following code, the constructor is not explicitly declared in the class definition but makeheaders will add it there because of the PUBLIC keyword that appears before the constructor |
︙ | ︙ | |||
865 866 867 868 869 870 871 | </p> <h4>3.6.1 C++ Limitations</h4> <p> Makeheaders does not understand more recent C++ syntax such as templates and namespaces. | | | | | | 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 | </p> <h4>3.6.1 C++ Limitations</h4> <p> Makeheaders does not understand more recent C++ syntax such as templates and namespaces. Perhaps these issues will be addressed in future revisions. </p> <a name="H0013"></a> <h3>3.7 Conditional Compilation</h3> <p> The makeheaders program understands and tracks the conditional compilation constructs in the source code files it scans. Hence, if the following code appears in a source file <pre> #ifdef UNIX # define WORKS_WELL 1 #else # define WORKS_WELL 0 #endif </pre> then the next patch of code will appear in the generated header for every .c file that uses the WORKS_WELL constant: <pre> #if defined(UNIX) # define WORKS_WELL 1 |
︙ | ︙ | |||
916 917 918 919 920 921 922 | </p> <p> Makeheaders does not understand the old K&R style of function and procedure definitions. It only understands the modern ANSI-C style, and will probably become very confused if it encounters an old K&R function. | | | > > > > > > > > > > > > > > > > > > > > | 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 | </p> <p> Makeheaders does not understand the old K&R style of function and procedure definitions. It only understands the modern ANSI-C style, and will probably become very confused if it encounters an old K&R function. Therefore you should take care to avoid putting K&R function definitions in your code. </p> <p> Makeheaders does not support defining an enumerated or aggregate type in the same statement as a variable declaration. None of the following statements work completely: <pre> struct {int field;} a; struct Tag {int field;} b; struct Tag c; </pre> Instead, define types separately from variables: <pre> #if INTERFACE struct Tag {int field;}; #endif Tag b, c; </pre> See <a href="#H0008">3.2 What Declarations Get Copied</a> for details, including on the automatic typedef. </p> <p> Makeheaders does not understand when you define more than one global variable with the same type separated by a comma. In other words, makeheaders does not understand this: <pre> |
︙ | ︙ | |||
971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 | For most projects the code constructs that makeheaders cannot handle are very rare. As long as you avoid excessive cleverness, makeheaders will probably be able to figure out what you want and will do the right thing. </p> <a name="H0015"></a> <h2>4.0 Using Makeheaders To Generate Documentation</h2> <p> Many people have observed the advantages of generating program documentation directly from the source code: <ul> <li> Less effort is involved. It is easier to write a program than it is to write a program and a document. <li> The documentation is more likely to agree with the code. When documentation is derived directly from the code, or is contained in comments immediately adjacent to the code, it is much more likely to be correct than if it is contained in a separate unrelated file in a different part of the source tree. <li> Information is kept in only one place. When a change occurs in the code, it is not necessary to make a corresponding change in a separate document. Just rerun the documentation generator. </ul> The makeheaders program does not generate program documentation itself. But you can use makeheaders to parse the program source code, extract | > > > > > > > > > > > > > > > | | | | | | | | | > | > > > > > > > > > > > > > > > > > > > > > > > > > | | > | 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 | For most projects the code constructs that makeheaders cannot handle are very rare. As long as you avoid excessive cleverness, makeheaders will probably be able to figure out what you want and will do the right thing. </p> <p> Makeheaders has limited understanding of enums. In particular, it does not realize the significance of enumerated values, so the enum is not emitted in the header files when its enumerated values are used unless the name associated with the enum is also used. Moreover, enums can be completely anonymous, e.g. “<code>enum {X, Y, Z};</code>”. Makeheaders ignores such enums so they can at least be used within a single source file. Makeheaders expects you to use #define constants instead. If you want enum features that #define lacks, and you need the enum in the interface, bypass makeheaders and write a header file by hand, or teach makeheaders to emit the enum definition when any of the enumerated values are used, rather than only when the top-level name (if any) is used. </p> <a name="H0015"></a> <h2>4.0 Using Makeheaders To Generate Documentation</h2> <p> Many people have observed the advantages of generating program documentation directly from the source code: <ul> <li> Less effort is involved. It is easier to write a program than it is to write a program and a document. <li> The documentation is more likely to agree with the code. 
When documentation is derived directly from the code, or is contained in comments immediately adjacent to the code, it is much more likely to be correct than if it is contained in a separate unrelated file in a different part of the source tree. <li> Information is kept in only one place. When a change occurs in the code, it is not necessary to make a corresponding change in a separate document. Just rerun the documentation generator. </ul> The makeheaders program does not generate program documentation itself. But you can use makeheaders to parse the program source code, extract the information that is relevant to the documentation and to pass this information to another tool to do the actual documentation preparation. </p> <p> When makeheaders is run with the “<code>-doc</code>” option, it emits no header files at all. Instead, it does a complete dump of its internal tables to standard output in a form that is easily parsed. This output can then be used by another program (the implementation of which is left as an exercise to the reader) that will use the information to prepare suitable documentation. </p> <p> The “<code>-doc</code>” option causes makeheaders to print information to standard output about all of the following objects: <ul> <li> C++ class declarations <li> Structure and union declarations <li> Enumerations <li> Typedefs <li> Procedure and function definitions <li> Global variables <li> Preprocessor macros (ex: “<code>#define</code>”) </ul> For each of these objects, the following information is output: <ul> <li> The name of the object. <li> The type of the object. (Structure, typedef, macro, etc.) <li> Flags to indicate if the declaration is exported (contained within an EXPORT_INTERFACE block) or local (contained with LOCAL_INTERFACE). <li> A flag to indicate if the object is declared in a C++ file. <li> The name of the file in which the object was declared. <li> The complete text of any block comment that preceeds the declarations. <li> If the declaration occurred inside a preprocessor conditional (“<code>#if</code>”) then the text of that conditional is provided. <li> The complete text of a declaration for the object. </ul> The exact output format will not be described here. It is simple to understand and parse and should be obvious to anyone who inspects some sample output. </p> <a name="H0016"></a> <h2>5.0 Compiling The Makeheaders Program</h2> <p> The source code for makeheaders is a single file of ANSI-C code, approximately 3000 lines in length. The program makes only modest demands of the system and C library and should compile without alteration on most ANSI C compilers and on most operating systems. It is known to compile using several variations of GCC for Unix as well as Cygwin32 and MSVC 5.0 for Win32. </p> <a name="H0017"></a> <h2>6.0 History</h2> <p> The makeheaders program was first written by D. Richard Hipp (also the original author of <a href="https://sqlite.org/">SQLite</a> and <a href="https://www.fossil-scm.org/">Fossil</a>) in 1993. Hipp open-sourced the project immediately, but it never caught on with any other developers and it continued to be used mostly by Hipp himself for over a decade. When Hipp was first writing the Fossil version control system in 2006 and 2007, he used makeheaders on that project to help simplify the source code. As the popularity of Fossil increased, the makeheaders that was incorporated into the Fossil source tree became the "official" makeheaders implementation. 
</p> <p> As this paragraph is being composed (2016-11-05), Fossil is the only project known to Hipp that is still using makeheaders. On the other hand, makeheaders has served the Fossil project well and there are no plans to remove it. </p> <a name="H0018"></a> <h2>7.0 Summary And Conclusion</h2> <p> The makeheaders program will automatically generate a minimal header file for each of a set of C source and header files, and will generate a composite header file for the entire source file suite, for either internal or external use. It can also be used as the parser in an automated program documentation system. </p> <p> The makeheaders program has been in use since 1994, in a wide variety of projects under both UNIX and Win32. In every project where it has been used, makeheaders has proven to be a very helpful aid in the construction and maintenance of large C codes. In at least two cases, makeheaders has facilitated development of programs that would have otherwise been all but impossible due to their size and complexity. </p> </body> </html>
Changes to src/makemake.tcl.
︙ | ︙ | |||
43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 | content db delta deltacmd descendants diff diffcmd doc encode event export file finfo foci fusefs glob graph gzip http http_socket http_transport | > > | 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 | content db delta deltacmd descendants diff diffcmd dispatch doc encode event export file finfo foci fshell fusefs glob graph gzip http http_socket http_transport |
︙ | ︙ | |||
120 121 122 123 124 125 126 127 128 129 130 131 132 133 | tar th_main timeline tkt tktsetup undo unicode update url user utf8 util verify vfile | > | 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 | tar th_main timeline tkt tktsetup undo unicode unversioned update url user utf8 util verify vfile |
︙ | ︙ | |||
150 151 152 153 154 155 156 | ../skins/*/*.txt } # Options used to compile the included SQLite library. # set SQLITE_OPTIONS { -DNDEBUG=1 | > > > > > > > > | > > < < | 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 | ../skins/*/*.txt } # Options used to compile the included SQLite library. # set SQLITE_OPTIONS { -DNDEBUG=1 -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_MEMSTATUS=0 -DSQLITE_DEFAULT_WAL_SYNCHRONOUS=1 -DSQLITE_LIKE_DOESNT_MATCH_BLOBS -DSQLITE_OMIT_DECLTYPE -DSQLITE_OMIT_DEPRECATED -DSQLITE_OMIT_PROGRESS_CALLBACK -DSQLITE_OMIT_SHARED_CACHE -DSQLITE_OMIT_LOAD_EXTENSION -DSQLITE_MAX_EXPR_DEPTH=0 -DSQLITE_USE_ALLOCA -DSQLITE_ENABLE_LOCKING_STYLE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4 -DSQLITE_ENABLE_EXPLAIN_COMMENTS -DSQLITE_ENABLE_FTS4 -DSQLITE_ENABLE_FTS3_PARENTHESIS -DSQLITE_ENABLE_DBSTAT_VTAB -DSQLITE_ENABLE_JSON1 -DSQLITE_ENABLE_FTS5 } |
︙ | ︙ | |||
242 243 244 245 246 247 248 249 250 251 252 253 254 255 | # This file is automatically generated. Instead of editing this # file, edit "makemake.tcl" then run "tclsh makemake.tcl" # to regenerate this file. # # This file is included by primary Makefile. # XTCC = $(TCC) -I. -I$(SRCDIR) -I$(OBJDIR) $(TCCFLAGS) $(CFLAGS) } writeln -nonewline "SRC =" foreach s [lsort $src] { writeln -nonewline " \\\n \$(SRCDIR)/$s.c" } | > | 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 | # This file is automatically generated. Instead of editing this # file, edit "makemake.tcl" then run "tclsh makemake.tcl" # to regenerate this file. # # This file is included by primary Makefile. # XBCC = $(BCC) $(BCCFLAGS) $(CFLAGS) XTCC = $(TCC) -I. -I$(SRCDIR) -I$(OBJDIR) $(TCCFLAGS) $(CFLAGS) } writeln -nonewline "SRC =" foreach s [lsort $src] { writeln -nonewline " \\\n \$(SRCDIR)/$s.c" } |
︙ | ︙ | |||
285 286 287 288 289 290 291 | codecheck: $(TRANS_SRC) $(OBJDIR)/codecheck1 $(OBJDIR)/codecheck1 $(TRANS_SRC) $(OBJDIR): -mkdir $(OBJDIR) $(OBJDIR)/translate: $(SRCDIR)/translate.c | | | | | | | | 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 | codecheck: $(TRANS_SRC) $(OBJDIR)/codecheck1 $(OBJDIR)/codecheck1 $(TRANS_SRC) $(OBJDIR): -mkdir $(OBJDIR) $(OBJDIR)/translate: $(SRCDIR)/translate.c $(XBCC) -o $(OBJDIR)/translate $(SRCDIR)/translate.c $(OBJDIR)/makeheaders: $(SRCDIR)/makeheaders.c $(XBCC) -o $(OBJDIR)/makeheaders $(SRCDIR)/makeheaders.c $(OBJDIR)/mkindex: $(SRCDIR)/mkindex.c $(XBCC) -o $(OBJDIR)/mkindex $(SRCDIR)/mkindex.c $(OBJDIR)/mkbuiltin: $(SRCDIR)/mkbuiltin.c $(XBCC) -o $(OBJDIR)/mkbuiltin $(SRCDIR)/mkbuiltin.c $(OBJDIR)/mkversion: $(SRCDIR)/mkversion.c $(XBCC) -o $(OBJDIR)/mkversion $(SRCDIR)/mkversion.c $(OBJDIR)/codecheck1: $(SRCDIR)/codecheck1.c $(XBCC) -o $(OBJDIR)/codecheck1 $(SRCDIR)/codecheck1.c # Run the test suite. # Other flags that can be included in TESTFLAGS are: # # -halt Stop testing after the first failed test # -keep Keep the temporary workspace for debugging # -prot Write a detailed log of the tests to the file ./prot |
︙ | ︙ | |||
336 337 338 339 340 341 342 | # Setup the options used to compile the included miniz library. MINIZ_OPTIONS = <<<MINIZ_OPTIONS>>> # The USE_SYSTEM_SQLITE variable may be undefined, set to 0, or set # to 1. If it is set to 1, then there is no need to build or link # the sqlite3.o object. Instead, the system SQLite will be linked # using -lsqlite3. | | | | 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 | # Setup the options used to compile the included miniz library. MINIZ_OPTIONS = <<<MINIZ_OPTIONS>>> # The USE_SYSTEM_SQLITE variable may be undefined, set to 0, or set # to 1. If it is set to 1, then there is no need to build or link # the sqlite3.o object. Instead, the system SQLite will be linked # using -lsqlite3. SQLITE3_OBJ.0 = $(OBJDIR)/sqlite3.o SQLITE3_OBJ.1 = SQLITE3_OBJ. = $(SQLITE3_OBJ.0) # The FOSSIL_ENABLE_MINIZ variable may be undefined, set to 0, or # set to 1. If it is set to 1, the miniz library included in the # source tree should be used; otherwise, it should not. MINIZ_OBJ.0 = MINIZ_OBJ.1 = $(OBJDIR)/miniz.o |
︙ | ︙ | |||
365 366 367 368 369 370 371 372 373 374 375 376 377 378 | # 0, ordinary SQLite is used. If 1, then sqlite3-see.c (not part of # the source tree) is used and extra flags are provided to enable # the SQLite Encryption Extension. SQLITE3_SRC.0 = sqlite3.c SQLITE3_SRC.1 = sqlite3-see.c SQLITE3_SRC. = sqlite3.c SQLITE3_SRC = $(SRCDIR)/$(SQLITE3_SRC.$(USE_SEE)) SEE_FLAGS.0 = SEE_FLAGS.1 = -DSQLITE_HAS_CODEC SEE_FLAGS. = SEE_FLAGS = $(SEE_FLAGS.$(USE_SEE)) }] writeln [string map [list <<<NEXT_LINE>>> \\] { | > > > > | 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 | # 0, ordinary SQLite is used. If 1, then sqlite3-see.c (not part of # the source tree) is used and extra flags are provided to enable # the SQLite Encryption Extension. SQLITE3_SRC.0 = sqlite3.c SQLITE3_SRC.1 = sqlite3-see.c SQLITE3_SRC. = sqlite3.c SQLITE3_SRC = $(SRCDIR)/$(SQLITE3_SRC.$(USE_SEE)) SQLITE3_SHELL_SRC.0 = shell.c SQLITE3_SHELL_SRC.1 = shell-see.c SQLITE3_SHELL_SRC. = shell.c SQLITE3_SHELL_SRC = $(SRCDIR)/$(SQLITE3_SHELL_SRC.$(USE_SEE)) SEE_FLAGS.0 = SEE_FLAGS.1 = -DSQLITE_HAS_CODEC SEE_FLAGS. = SEE_FLAGS = $(SEE_FLAGS.$(USE_SEE)) }] writeln [string map [list <<<NEXT_LINE>>> \\] { |
︙ | ︙ | |||
436 437 438 439 440 441 442 | writeln "\$(OBJDIR)/$s.h:\t\$(OBJDIR)/headers\n" } writeln "\$(OBJDIR)/sqlite3.o:\t\$(SQLITE3_SRC)" writeln "\t\$(XTCC) \$(SQLITE_OPTIONS) \$(SQLITE_CFLAGS) \$(SEE_FLAGS) \\" writeln "\t\t-c \$(SQLITE3_SRC) -o \$@" | | | | 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 | writeln "\$(OBJDIR)/$s.h:\t\$(OBJDIR)/headers\n" } writeln "\$(OBJDIR)/sqlite3.o:\t\$(SQLITE3_SRC)" writeln "\t\$(XTCC) \$(SQLITE_OPTIONS) \$(SQLITE_CFLAGS) \$(SEE_FLAGS) \\" writeln "\t\t-c \$(SQLITE3_SRC) -o \$@" writeln "\$(OBJDIR)/shell.o:\t\$(SQLITE3_SHELL_SRC) \$(SRCDIR)/sqlite3.h" writeln "\t\$(XTCC) \$(SHELL_OPTIONS) \$(SHELL_CFLAGS) \$(LINENOISE_DEF.\$(USE_LINENOISE)) -c \$(SQLITE3_SHELL_SRC) -o \$@\n" writeln "\$(OBJDIR)/linenoise.o:\t\$(SRCDIR)/linenoise.c \$(SRCDIR)/linenoise.h" writeln "\t\$(XTCC) -c \$(SRCDIR)/linenoise.c -o \$@\n" writeln "\$(OBJDIR)/th.o:\t\$(SRCDIR)/th.c" writeln "\t\$(XTCC) -c \$(SRCDIR)/th.c -o \$@\n" |
︙ | ︙ | |||
514 515 516 517 518 519 520 521 522 523 524 525 526 | # the following to point from the build directory to the src/ folder. # SRCDIR = src #### The directory into which object code files should be written. # OBJDIR = wbld #### C Compiler and options for use in building executables that # will run on the platform that is doing the build. This is used # to compile code-generator programs as part of the build process. # See TCC below for the C compiler for building the finished binary. # | > > > > > > > > | | 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 | # the following to point from the build directory to the src/ folder. # SRCDIR = src #### The directory into which object code files should be written. # OBJDIR = wbld #### C compiler for use in building executables that will run on # the platform that is doing the build. This is used to compile # code-generator programs as part of the build process. See TCC # and TCCEXE below for the C compiler for building the finished # binary. # BCCEXE = gcc #### C Compiler and options for use in building executables that # will run on the platform that is doing the build. This is used # to compile code-generator programs as part of the build process. # See TCC below for the C compiler for building the finished binary. # BCC = $(BCCEXE) #### Enable compiling with debug symbols (much larger binary) # # FOSSIL_ENABLE_SYMBOLS = 1 #### Enable JSON (http://www.json.org) support using "cson" # |
︙ | ︙ | |||
616 617 618 619 620 621 622 | # used, taking into account whether zlib is actually enabled and the target # processor architecture. # ifndef X64 SSLCONFIG = mingw ifndef FOSSIL_ENABLE_MINIZ ZLIBCONFIG = LOC="-DASMV -DASMINF" OBJA="inffas86.o match.o" | | | | | | 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 | # used, taking into account whether zlib is actually enabled and the target # processor architecture. # ifndef X64 SSLCONFIG = mingw ifndef FOSSIL_ENABLE_MINIZ ZLIBCONFIG = LOC="-DASMV -DASMINF" OBJA="inffas86.o match.o" ZLIBTARGETS = $(ZLIBDIR)/inffas86.o $(ZLIBDIR)/match.o else ZLIBCONFIG = ZLIBTARGETS = endif else SSLCONFIG = mingw64 ZLIBCONFIG = ZLIBTARGETS = endif #### Disable creation of the OpenSSL shared libraries. Also, disable support # for both SSLv2 and SSLv3 (i.e. thereby forcing the use of TLS). # SSLCONFIG += no-ssl2 no-ssl3 no-shared #### When using zlib, make sure that OpenSSL is configured to use the zlib # that Fossil knows about (i.e. the one within the source tree). # ifndef FOSSIL_ENABLE_MINIZ SSLCONFIG += --with-zlib-lib=$(PWD)/$(ZLIBDIR) --with-zlib-include=$(PWD)/$(ZLIBDIR) zlib endif #### The directories where the OpenSSL include and library files are located. # The recommended usage here is to use the Sysinternals junction tool # to create a hard link between an "openssl-1.x" sub-directory of the # Fossil source code directory and the target OpenSSL source directory. # OPENSSLDIR = $(SRCDIR)/../compat/openssl-1.0.2j OPENSSLINCDIR = $(OPENSSLDIR)/include OPENSSLLIBDIR = $(OPENSSLDIR) #### Either the directory where the Tcl library is installed or the Tcl # source code directory resides (depending on the value of the macro # FOSSIL_TCL_SOURCE). If this points to the Tcl install directory, # this directory must have "include" and "lib" sub-directories. If |
︙ | ︙ | |||
685 686 687 688 689 690 691 | endif TCLTARGET = libtclstub86.a else LIBTCL = -ltcl86 TCLTARGET = binaries endif | > > > > > > > > | | | | 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 | endif TCLTARGET = libtclstub86.a else LIBTCL = -ltcl86 TCLTARGET = binaries endif #### C compiler for use in building executables that will run on the # target platform. This is usually the same as BCCEXE, unless you # are cross-compiling. This C compiler builds the finished binary # for fossil. See BCC and BCCEXE above for the C compiler for # building intermediate code-generator tools. # TCCEXE = gcc #### C compiler and options for use in building executables that will # run on the target platform. This is usually the almost the same # as BCC, unless you are cross-compiling. This C compiler builds # the finished binary for fossil. The BCC compiler above is used # for building intermediate code-generator tools. # TCC = $(PREFIX)$(TCCEXE) -Wall #### Add the necessary command line options to build with debugging # symbols, if enabled. # ifdef FOSSIL_ENABLE_SYMBOLS TCC += -g else |
︙ | ︙ | |||
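#### Hypothetical cross-compilation note (the toolchain prefix below is an
#    example, not a requirement): because TCC is spelled $(PREFIX)$(TCCEXE)
#    while BCC remains the native compiler, a MinGW cross-build can select
#    its target toolchain simply by overriding PREFIX on the command line:
#
#        make -f win/Makefile.mingw PREFIX=i686-w64-mingw32-
#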
830 831 832 833 834 835 836 | ifdef USE_SYSTEM_SQLITE LIB += -lsqlite3 endif #### OpenSSL: Add the necessary libraries required, if enabled. # ifdef FOSSIL_ENABLE_SSL | | | 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 | ifdef USE_SYSTEM_SQLITE LIB += -lsqlite3 endif #### OpenSSL: Add the necessary libraries required, if enabled. # ifdef FOSSIL_ENABLE_SSL LIB += -lssl -lcrypto -lgdi32 -lcrypt32 endif #### Tcl: Add the necessary libraries required, if enabled. # ifdef FOSSIL_ENABLE_TCL LIB += $(LIBTCL) endif |
︙ | ︙ | |||
884 885 886 887 888 889 890 891 892 893 894 895 896 897 | #### Include a configuration file that can override any one of these settings. # -include config.w32 # STOP HERE # You should not need to change anything below this line #-------------------------------------------------------- XTCC = $(TCC) $(CFLAGS) -I. -I$(SRCDIR) } writeln -nonewline "SRC =" foreach s [lsort $src] { writeln -nonewline " \\\n \$(SRCDIR)/$s.c" } writeln "\n" | > | 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 | #### Include a configuration file that can override any one of these settings. # -include config.w32 # STOP HERE # You should not need to change anything below this line #-------------------------------------------------------- XBCC = $(BCC) $(CFLAGS) XTCC = $(TCC) $(CFLAGS) -I. -I$(SRCDIR) } writeln -nonewline "SRC =" foreach s [lsort $src] { writeln -nonewline " \\\n \$(SRCDIR)/$s.c" } writeln "\n" |
︙ | ︙ | |||
980 981 982 983 984 985 986 | ifdef USE_WINDOWS $(MKDIR) $(subst /,\,$(OBJDIR)) else $(MKDIR) $(OBJDIR) endif $(TRANSLATE): $(SRCDIR)/translate.c | | | | | | | | | > > > > < < < < < < > > > > > | | > > | | | | | | | 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 | ifdef USE_WINDOWS $(MKDIR) $(subst /,\,$(OBJDIR)) else $(MKDIR) $(OBJDIR) endif $(TRANSLATE): $(SRCDIR)/translate.c $(XBCC) -o $@ $(SRCDIR)/translate.c $(MAKEHEADERS): $(SRCDIR)/makeheaders.c $(XBCC) -o $@ $(SRCDIR)/makeheaders.c $(MKINDEX): $(SRCDIR)/mkindex.c $(XBCC) -o $@ $(SRCDIR)/mkindex.c $(MKBUILTIN): $(SRCDIR)/mkbuiltin.c $(XBCC) -o $@ $(SRCDIR)/mkbuiltin.c $(MKVERSION): $(SRCDIR)/mkversion.c $(XBCC) -o $@ $(SRCDIR)/mkversion.c $(CODECHECK1): $(SRCDIR)/codecheck1.c $(XBCC) -o $@ $(SRCDIR)/codecheck1.c # WARNING. DANGER. Running the test suite modifies the repository the # build is done from, i.e. the checkout belongs to. Do not sync/push # the repository after running the tests. test: $(OBJDIR) $(APPNAME) $(TCLSH) $(SRCDIR)/../test/tester.tcl $(APPNAME) $(OBJDIR)/VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(MKVERSION) $(MKVERSION) $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(SRCDIR)/../VERSION >$@ # The USE_SYSTEM_SQLITE variable may be undefined, set to 0, or set # to 1. If it is set to 1, then there is no need to build or link # the sqlite3.o object. Instead, the system SQLite will be linked # using -lsqlite3. SQLITE3_OBJ.0 = $(OBJDIR)/sqlite3.o SQLITE3_OBJ.1 = SQLITE3_OBJ. = $(SQLITE3_OBJ.0) # The FOSSIL_ENABLE_MINIZ variable may be undefined, set to 0, or # set to 1. If it is set to 1, the miniz library included in the # source tree should be used; otherwise, it should not. MINIZ_OBJ.0 = MINIZ_OBJ.1 = $(OBJDIR)/miniz.o MINIZ_OBJ. = $(MINIZ_OBJ.0) # The USE_SEE variable may be undefined, 0 or 1. If undefined or # 0, ordinary SQLite is used. If 1, then sqlite3-see.c (not part of # the source tree) is used and extra flags are provided to enable # the SQLite Encryption Extension. SQLITE3_SRC.0 = sqlite3.c SQLITE3_SRC.1 = sqlite3-see.c SQLITE3_SRC. = sqlite3.c SQLITE3_SRC = $(SRCDIR)/$(SQLITE3_SRC.$(USE_SEE)) SQLITE3_SHELL_SRC.0 = shell.c SQLITE3_SHELL_SRC.1 = shell-see.c SQLITE3_SHELL_SRC. = shell.c SQLITE3_SHELL_SRC = $(SRCDIR)/$(SQLITE3_SHELL_SRC.$(USE_SEE)) SEE_FLAGS.0 = SEE_FLAGS.1 = -DSQLITE_HAS_CODEC SEE_FLAGS. 
= SEE_FLAGS = $(SEE_FLAGS.$(USE_SEE)) } writeln [string map [list <<<NEXT_LINE>>> \\] { EXTRAOBJ = <<<NEXT_LINE>>> $(SQLITE3_OBJ.$(USE_SYSTEM_SQLITE)) <<<NEXT_LINE>>> $(MINIZ_OBJ.$(FOSSIL_ENABLE_MINIZ)) <<<NEXT_LINE>>> $(OBJDIR)/shell.o <<<NEXT_LINE>>> $(OBJDIR)/th.o <<<NEXT_LINE>>> $(OBJDIR)/th_lang.o <<<NEXT_LINE>>> $(OBJDIR)/th_tcl.o <<<NEXT_LINE>>> $(OBJDIR)/cson_amalgamation.o }] writeln { $(ZLIBDIR)/inffas86.o: $(TCC) -c -o $@ -DASMINF -I$(ZLIBDIR) -O3 $(ZLIBDIR)/contrib/inflate86/inffas86.c $(ZLIBDIR)/match.o: $(TCC) -c -o $@ -DASMV $(ZLIBDIR)/contrib/asm686/match.S zlib: $(ZLIBTARGETS) $(MAKE) -C $(ZLIBDIR) PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) $(ZLIBCONFIG) -f win32/Makefile.gcc libz.a clean-zlib: $(MAKE) -C $(ZLIBDIR) PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) -f win32/Makefile.gcc clean ifdef FOSSIL_ENABLE_MINIZ BLDTARGETS = else BLDTARGETS = zlib endif openssl: $(BLDTARGETS) cd $(OPENSSLLIBDIR);./Configure --cross-compile-prefix=$(PREFIX) $(SSLCONFIG) $(MAKE) -C $(OPENSSLLIBDIR) PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) build_libs clean-openssl: $(MAKE) -C $(OPENSSLLIBDIR) PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) clean tcl: cd $(TCLSRCDIR)/win;./configure $(MAKE) -C $(TCLSRCDIR)/win PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) $(TCLTARGET) clean-tcl: $(MAKE) -C $(TCLSRCDIR)/win PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) distclean APPTARGETS += $(BLDTARGETS) ifdef FOSSIL_BUILD_SSL APPTARGETS += openssl endif $(APPNAME): $(APPTARGETS) $(OBJDIR)/headers $(CODECHECK1) $(OBJ) $(EXTRAOBJ) $(OBJDIR)/fossil.o $(CODECHECK1) $(TRANS_SRC) |
︙ | ︙ | |||
1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 | foreach s [lsort $src] { writeln "\$(OBJDIR)/${s}_.c:\t\$(SRCDIR)/$s.c \$(TRANSLATE)" writeln "\t\$(TRANSLATE) \$(SRCDIR)/$s.c >\$@\n" writeln "\$(OBJDIR)/$s.o:\t\$(OBJDIR)/${s}_.c \$(OBJDIR)/$s.h$extra_h($s)\$(SRCDIR)/config.h" writeln "\t\$(XTCC) -o \$(OBJDIR)/$s.o -c \$(OBJDIR)/${s}_.c\n" writeln "\$(OBJDIR)/${s}.h:\t\$(OBJDIR)/headers\n" } set SQLITE_WIN32_OPTIONS $SQLITE_OPTIONS lappend SQLITE_WIN32_OPTIONS -DSQLITE_WIN32_NO_ANSI set MINGW_SQLITE_OPTIONS $SQLITE_WIN32_OPTIONS | > > > | | | | 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 | foreach s [lsort $src] { writeln "\$(OBJDIR)/${s}_.c:\t\$(SRCDIR)/$s.c \$(TRANSLATE)" writeln "\t\$(TRANSLATE) \$(SRCDIR)/$s.c >\$@\n" writeln "\$(OBJDIR)/$s.o:\t\$(OBJDIR)/${s}_.c \$(OBJDIR)/$s.h$extra_h($s)\$(SRCDIR)/config.h" writeln "\t\$(XTCC) -o \$(OBJDIR)/$s.o -c \$(OBJDIR)/${s}_.c\n" writeln "\$(OBJDIR)/${s}.h:\t\$(OBJDIR)/headers\n" } writeln {MINGW_OPTIONS = -D_HAVE__MINGW_H } set SQLITE_WIN32_OPTIONS $SQLITE_OPTIONS lappend SQLITE_WIN32_OPTIONS -DSQLITE_WIN32_NO_ANSI set MINGW_SQLITE_OPTIONS $SQLITE_WIN32_OPTIONS lappend MINGW_SQLITE_OPTIONS {$(MINGW_OPTIONS)} lappend MINGW_SQLITE_OPTIONS -DSQLITE_USE_MALLOC_H lappend MINGW_SQLITE_OPTIONS -DSQLITE_USE_MSIZE set MINIZ_WIN32_OPTIONS $MINIZ_OPTIONS set j " \\\n " writeln "SQLITE_OPTIONS = [join $MINGW_SQLITE_OPTIONS $j]\n" set j " \\\n " writeln "SHELL_OPTIONS = [join $SHELL_WIN32_OPTIONS $j]\n" set j " \\\n " writeln "MINIZ_OPTIONS = [join $MINIZ_WIN32_OPTIONS $j]\n" writeln "\$(OBJDIR)/sqlite3.o:\t\$(SQLITE3_SRC) \$(SRCDIR)/../win/Makefile.mingw" writeln "\t\$(XTCC) \$(SQLITE_OPTIONS) \$(SQLITE_CFLAGS) \$(SEE_FLAGS) \\" writeln "\t\t-c \$(SQLITE3_SRC) -o \$@\n" writeln "\$(OBJDIR)/cson_amalgamation.o:\t\$(SRCDIR)/cson_amalgamation.c" writeln "\t\$(XTCC) -c \$(SRCDIR)/cson_amalgamation.c -o \$@\n" writeln "\$(OBJDIR)/json.o \$(OBJDIR)/json_artifact.o \$(OBJDIR)/json_branch.o \$(OBJDIR)/json_config.o \$(OBJDIR)/json_diff.o \$(OBJDIR)/json_dir.o \$(OBJDIR)/jsos_finfo.o \$(OBJDIR)/json_login.o \$(OBJDIR)/json_query.o \$(OBJDIR)/json_report.o \$(OBJDIR)/json_status.o \$(OBJDIR)/json_tag.o \$(OBJDIR)/json_timeline.o \$(OBJDIR)/json_user.o \$(OBJDIR)/json_wiki.o : \$(SRCDIR)/json_detail.h\n" writeln "\$(OBJDIR)/shell.o:\t\$(SQLITE3_SHELL_SRC) \$(SRCDIR)/sqlite3.h \$(SRCDIR)/../win/Makefile.mingw" writeln "\t\$(XTCC) \$(SHELL_OPTIONS) \$(SHELL_CFLAGS) -c \$(SQLITE3_SHELL_SRC) -o \$@\n" writeln "\$(OBJDIR)/th.o:\t\$(SRCDIR)/th.c" writeln "\t\$(XTCC) -c \$(SRCDIR)/th.c -o \$@\n" writeln "\$(OBJDIR)/th_lang.o:\t\$(SRCDIR)/th_lang.c" writeln "\t\$(XTCC) -c \$(SRCDIR)/th_lang.c -o \$@\n" |
︙ | ︙ | |||
1264 1265 1266 1267 1268 1269 1270 | writeln "\t+echo fossil >> \$@" writeln "\t+echo \$(LIBS) >> \$@" writeln "\t+echo. >> \$@" writeln "\t+echo fossil >> \$@" writeln { translate$E: $(SRCDIR)\translate.c | | | | | | | | 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 1333 1334 | writeln "\t+echo fossil >> \$@" writeln "\t+echo \$(LIBS) >> \$@" writeln "\t+echo. >> \$@" writeln "\t+echo fossil >> \$@" writeln { translate$E: $(SRCDIR)\translate.c $(XBCC) -o$@ $** makeheaders$E: $(SRCDIR)\makeheaders.c $(XBCC) -o$@ $** mkindex$E: $(SRCDIR)\mkindex.c $(XBCC) -o$@ $** mkbuiltin$E: $(SRCDIR)\mkbuiltin.c $(XBCC) -o$@ $** mkversion$E: $(SRCDIR)\mkversion.c $(XBCC) -o$@ $** codecheck1$E: $(SRCDIR)\codecheck1.c $(XBCC) -o$@ $** $(OBJDIR)\shell$O : $(SRCDIR)\shell.c $(TCC) -o$@ -c $(SHELL_OPTIONS) $(SQLITE_OPTIONS) $(SHELL_CFLAGS) $** $(OBJDIR)\sqlite3$O : $(SRCDIR)\sqlite3.c $(TCC) -o$@ -c $(SQLITE_OPTIONS) $(SQLITE_CFLAGS) $** |
︙ | ︙ | |||
1458 1459 1460 1461 1462 1463 1464 | # Enable support for the SQLite Encryption Extension? !ifndef USE_SEE USE_SEE = 0 !endif !if $(FOSSIL_ENABLE_SSL)!=0 | | | | 1499 1500 1501 1502 1503 1504 1505 1506 1507 1508 1509 1510 1511 1512 1513 1514 1515 1516 1517 1518 1519 1520 1521 | # Enable support for the SQLite Encryption Extension? !ifndef USE_SEE USE_SEE = 0 !endif !if $(FOSSIL_ENABLE_SSL)!=0 SSLDIR = $(B)\compat\openssl-1.0.2j SSLINCDIR = $(SSLDIR)\inc32 !if $(FOSSIL_DYNAMIC_BUILD)!=0 SSLLIBDIR = $(SSLDIR)\out32dll !else SSLLIBDIR = $(SSLDIR)\out32 !endif SSLLFLAGS = /nologo /opt:ref /debug SSLLIB = ssleay32.lib libeay32.lib user32.lib gdi32.lib crypt32.lib !if "$(PLATFORM)"=="amd64" || "$(PLATFORM)"=="x64" !message Using 'x64' platform for OpenSSL... # BUGBUG (OpenSSL): Using "no-ssl*" here breaks the build. # SSLCONFIG = VC-WIN64A no-asm no-ssl2 no-ssl3 SSLCONFIG = VC-WIN64A no-asm !if $(FOSSIL_DYNAMIC_BUILD)!=0 SSLCONFIG = $(SSLCONFIG) shared |
︙ | ︙ | |||
1786 1787 1788 1789 1790 1791 1792 | writeln "!endif" writeln "\techo \$(LIBS) $redir \$@" writeln { $(OX): @-mkdir $@ translate$E: $(SRCDIR)\translate.c | | | | | | | > > > | > > > | | 1827 1828 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 1843 1844 1845 1846 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857 1858 1859 1860 1861 1862 1863 1864 1865 | writeln "!endif" writeln "\techo \$(LIBS) $redir \$@" writeln { $(OX): @-mkdir $@ translate$E: $(SRCDIR)\translate.c $(XBCC) $** makeheaders$E: $(SRCDIR)\makeheaders.c $(XBCC) $** mkindex$E: $(SRCDIR)\mkindex.c $(XBCC) $** mkbuiltin$E: $(SRCDIR)\mkbuiltin.c $(XBCC) $** mkversion$E: $(SRCDIR)\mkversion.c $(XBCC) $** codecheck1$E: $(SRCDIR)\codecheck1.c $(XBCC) $** !if $(USE_SEE)!=0 SQLITE3_SHELL_SRC = $(SRCDIR)\shell-see.c !else SQLITE3_SHELL_SRC = $(SRCDIR)\shell.c !endif $(OX)\shell$O : $(SQLITE3_SHELL_SRC) $B\win\Makefile.msc $(TCC) /Fo$@ $(SHELL_OPTIONS) $(SQLITE_OPTIONS) $(SHELL_CFLAGS) -c $(SQLITE3_SHELL_SRC) !if $(USE_SEE)!=0 SQLITE3_SRC = $(SRCDIR)\sqlite3-see.c !else SQLITE3_SRC = $(SRCDIR)\sqlite3.c !endif |
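# Hypothetical invocation (the *-see.c sources are not part of the source
# tree and must be supplied separately): to build with the SQLite Encryption
# Extension, override USE_SEE on the nmake command line so that both source
# selections above switch to their -see variants, e.g. from the win directory:
#
#     nmake /f Makefile.msc USE_SEE=1
#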
︙ | ︙ |
Changes to src/manifest.c.
︙ | ︙ | |||
1418 1419 1420 1421 1422 1423 1424 | if( *ppOther==0 ) return; } if( fetch_baseline(pParent, 0) || fetch_baseline(pChild, 0) ){ manifest_destroy(*ppOther); return; } isPublic = !content_is_private(mid); | | | 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 | if( *ppOther==0 ) return; } if( fetch_baseline(pParent, 0) || fetch_baseline(pChild, 0) ){ manifest_destroy(*ppOther); return; } isPublic = !content_is_private(mid); /* If pParent is not the primary parent of pChild, and the primary ** parent of pChild is a phantom, then abort this routine without ** doing any work. The mlink entries will be computed when the ** primary parent dephantomizes. */ if( !isPrim && otherRid==mid && !db_exists("SELECT 1 FROM blob WHERE uuid=%Q AND size>0", |
︙ | ︙ | |||
1528 1529 1530 1531 1532 1533 1534 | if( pChildFile==0 && pParentFile->zUuid!=0 ){ add_one_mlink(pmid, pParentFile->zUuid, mid, 0, pParentFile->zName, 0, isPublic, isPrim, 0); } } } manifest_cache_insert(*ppOther); | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > | 1528 1529 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 | if( pChildFile==0 && pParentFile->zUuid!=0 ){ add_one_mlink(pmid, pParentFile->zUuid, mid, 0, pParentFile->zName, 0, isPublic, isPrim, 0); } } } manifest_cache_insert(*ppOther); /* If pParent is the primary parent of pChild, also run this analysis ** for all merge parents of pChild */ if( isPrim ){ for(i=1; i<pChild->nParent; i++){ pmid = uuid_to_rid(pChild->azParent[i], 0); if( pmid<=0 ) continue; add_mlink(pmid, 0, mid, pChild, 0); } } } /* ** For a check-in with RID "rid" that has nParent parent check-ins given ** by the UUIDs in azParent[], create all appropriate plink and mlink table ** entries. ** ** The primary parent is the first UUID on the azParent[] list. ** ** Return the RID of the primary parent. */ static int manifest_add_checkin_linkages( int rid, /* The RID of the check-in */ Manifest *p, /* Manifest for this check-in */ int nParent, /* Number of parents for this check-in */ char **azParent /* UUIDs for each parent */ ){ int i; int parentid = 0; char zBaseId[30]; /* Baseline manifest RID for deltas. "NULL" otherwise */ Stmt q; if( p->zBaseline ){ sqlite3_snprintf(sizeof(zBaseId), zBaseId, "%d", uuid_to_rid(p->zBaseline,1)); }else{ sqlite3_snprintf(sizeof(zBaseId), zBaseId, "NULL"); } for(i=0; i<nParent; i++){ int pid = uuid_to_rid(azParent[i], 1); db_multi_exec( "INSERT OR IGNORE INTO plink(pid, cid, isprim, mtime, baseid)" "VALUES(%d, %d, %d, %.17g, %s)", pid, rid, i==0, p->rDate, zBaseId/*safe-for-%s*/); if( i==0 ) parentid = pid; } add_mlink(parentid, 0, rid, p, 1); if( nParent>1 ){ /* Change MLINK.PID from 0 to -1 for files that are added by merge. */ db_multi_exec( "UPDATE mlink SET pid=-1" " WHERE mid=%d" " AND pid=0" " AND fnid IN " " (SELECT fnid FROM mlink WHERE mid=%d GROUP BY fnid" " HAVING count(*)<%d)", rid, rid, nParent ); } db_prepare(&q, "SELECT cid, isprim FROM plink WHERE pid=%d", rid); while( db_step(&q)==SQLITE_ROW ){ int cid = db_column_int(&q, 0); int isprim = db_column_int(&q, 1); add_mlink(rid, p, cid, 0, isprim); } db_finalize(&q); if( nParent==0 ){ /* For root files (files without parents) add mlink entries ** showing all content as new. 
*/ int isPublic = !content_is_private(rid); for(i=0; i<p->nFile; i++){ add_one_mlink(0, 0, rid, p->aFile[i].zUuid, p->aFile[i].zName, 0, isPublic, 1, manifest_file_mperm(&p->aFile[i])); } } return parentid; } /* ** There exists a "parent" tag against checkin rid that has value zValue. ** If value is well-formed (meaning that it is a list of UUIDs), then use ** zValue to reparent check-in rid. */ void manifest_reparent_checkin(int rid, const char *zValue){ int nParent; char *zCopy = 0; char **azParent = 0; Manifest *p = 0; int i; int n = (int)strlen(zValue); nParent = (n+1)/(UUID_SIZE+1); if( nParent*(UUID_SIZE+1) - 1 !=n ) return; if( nParent<1 ) return; zCopy = fossil_strdup(zValue); azParent = fossil_malloc( sizeof(azParent[0])*nParent ); for(i=0; i<nParent; i++){ azParent[i] = &zCopy[i*(UUID_SIZE+1)]; if( i<nParent-1 && azParent[i][UUID_SIZE]!=' ' ) break; azParent[i][UUID_SIZE] = 0; if( !validate16(azParent[i],UUID_SIZE) ) break; } if( i==nParent && !db_exists("SELECT 1 FROM plink WHERE cid=%d AND pid=%d", rid, uuid_to_rid(azParent[0],0)) ){ p = manifest_get(rid, CFTYPE_MANIFEST, 0); } if( p!=0 ){ db_multi_exec( "DELETE FROM plink WHERE cid=%d;" "DELETE FROM mlink WHERE mid=%d;", rid, rid ); manifest_add_checkin_linkages(rid,p,nParent,azParent); } manifest_destroy(p); fossil_free(azParent); fossil_free(zCopy); } /* ** Setup to do multiple manifest_crosslink() calls. ** ** This routine creates TEMP tables for holding information for ** processing that must be deferred until all artifacts have been ** seen at least once. The deferred processing is accomplished ** by the call to manifest_crosslink_end(). */ void manifest_crosslink_begin(void){ assert( manifest_crosslink_busy==0 ); manifest_crosslink_busy = 1; db_begin_transaction(); db_multi_exec( "CREATE TEMP TABLE pending_tkt(uuid TEXT UNIQUE);" |
︙ | ︙ | |||
1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 | assert( manifest_crosslink_busy==1 ); if( permitHooks ){ rc = xfer_run_common_script(); if( rc==TH_OK ){ zScript = xfer_ticket_code(); } } db_prepare(&q, "SELECT uuid FROM pending_tkt"); while( db_step(&q)==SQLITE_ROW ){ const char *zUuid = db_column_text(&q, 0); ticket_rebuild_entry(zUuid); if( permitHooks && rc==TH_OK ){ rc = xfer_run_script(zScript, zUuid, 0); } | > > > > > > > > > > > | 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 | assert( manifest_crosslink_busy==1 ); if( permitHooks ){ rc = xfer_run_common_script(); if( rc==TH_OK ){ zScript = xfer_ticket_code(); } } db_prepare(&q, "SELECT rid, value FROM tagxref" " WHERE tagid=%d AND tagtype=1", TAG_PARENT ); while( db_step(&q)==SQLITE_ROW ){ int rid = db_column_int(&q,0); const char *zValue = db_column_text(&q,1); manifest_reparent_checkin(rid, zValue); } db_finalize(&q); db_prepare(&q, "SELECT uuid FROM pending_tkt"); while( db_step(&q)==SQLITE_ROW ){ const char *zUuid = db_column_text(&q, 0); ticket_rebuild_entry(zUuid); if( permitHooks && rc==TH_OK ){ rc = xfer_run_script(zScript, zUuid, 0); } |
︙ | ︙ | |||
1796 1797 1798 1799 1800 1801 1802 | ** Processing for other control artifacts was added later. The name ** of the routine, "manifest_crosslink", and the name of this source ** file, is a legacy of its original use. */ int manifest_crosslink(int rid, Blob *pContent, int flags){ int i, rc = TH_OK; Manifest *p; | < | 1919 1920 1921 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931 1932 | ** Processing for other control artifacts was added later. The name ** of the routine, "manifest_crosslink", and the name of this source ** file, is a legacy of its original use. */ int manifest_crosslink(int rid, Blob *pContent, int flags){ int i, rc = TH_OK; Manifest *p; int parentid = 0; int permitHooks = (flags & MC_PERMIT_HOOKS); const char *zScript = 0; const char *zUuid = 0; if( (p = manifest_cache_find(rid))!=0 ){ blob_reset(pContent); |
︙ | ︙ | |||
1835 1836 1837 1838 1839 1840 1841 | if( p->type==CFTYPE_MANIFEST ){ if( permitHooks ){ zScript = xfer_commit_code(); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); } if( !db_exists("SELECT 1 FROM mlink WHERE mid=%d", rid) ){ char *zCom; | < < < < < < < < < < < < < | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 1957 1958 1959 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 1970 1971 | if( p->type==CFTYPE_MANIFEST ){ if( permitHooks ){ zScript = xfer_commit_code(); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); } if( !db_exists("SELECT 1 FROM mlink WHERE mid=%d", rid) ){ char *zCom; parentid = manifest_add_checkin_linkages(rid,p,p->nParent,p->azParent); search_doc_touch('c', rid, 0); db_multi_exec( "REPLACE INTO event(type,mtime,objid,user,comment," "bgcolor,euser,ecomment,omtime)" "VALUES('ci'," " coalesce(" " (SELECT julianday(value) FROM tagxref WHERE tagid=%d AND rid=%d)," |
︙ | ︙ | |||
2065 2066 2067 2068 2069 2070 2071 | const char *zName = db_column_text(&qatt, 3); const char isAdd = (zSrc && zSrc[0]) ? 1 : 0; char *zComment; if( isAdd ){ zComment = mprintf( "Add attachment [/artifact/%!S|%h] to" " tech note [/technote/%!S|%S]", | | | | 2144 2145 2146 2147 2148 2149 2150 2151 2152 2153 2154 2155 2156 2157 2158 2159 2160 2161 2162 2163 2164 2165 2166 2167 2168 | const char *zName = db_column_text(&qatt, 3); const char isAdd = (zSrc && zSrc[0]) ? 1 : 0; char *zComment; if( isAdd ){ zComment = mprintf( "Add attachment [/artifact/%!S|%h] to" " tech note [/technote/%!S|%S]", zSrc, zName, zTarget, zTarget); }else{ zComment = mprintf( "Delete attachment \"%h\" from" " tech note [/technote/%!S|%S]", zName, zTarget, zTarget); } db_multi_exec("UPDATE event SET comment=%Q, type='e'" " WHERE objid=%Q", zComment, zAttachId); fossil_free(zComment); } db_finalize(&qatt); } if( p->type==CFTYPE_TICKET ){ char *zTag; Stmt qatt; assert( manifest_crosslink_busy==1 ); |
︙ | ︙ | |||
2112 2113 2114 2115 2116 2117 2118 | }else{ zComment = mprintf("Delete attachment \"%h\" from ticket [%!S|%S]", zName, zTarget, zTarget); } db_multi_exec("UPDATE event SET comment=%Q, type='t'" " WHERE objid=%Q", zComment, zAttachId); | | | | | 2191 2192 2193 2194 2195 2196 2197 2198 2199 2200 2201 2202 2203 2204 2205 2206 2207 2208 2209 2210 2211 2212 2213 2214 2215 2216 2217 2218 2219 2220 2221 2222 2223 | }else{ zComment = mprintf("Delete attachment \"%h\" from ticket [%!S|%S]", zName, zTarget, zTarget); } db_multi_exec("UPDATE event SET comment=%Q, type='t'" " WHERE objid=%Q", zComment, zAttachId); fossil_free(zComment); } db_finalize(&qatt); } if( p->type==CFTYPE_ATTACHMENT ){ char *zComment = 0; const char isAdd = (p->zAttachSrc && p->zAttachSrc[0]) ? 1 : 0; /* We assume that we're attaching to a wiki page until we ** prove otherwise (which could on a later artifact if we ** process the attachment artifact before the artifact to ** which it is attached!) */ char attachToType = 'w'; if( fossil_is_uuid(p->zAttachTarget) ){ if( db_exists("SELECT 1 FROM tag WHERE tagname='tkt-%q'", p->zAttachTarget) ){ attachToType = 't'; /* Attaching to known ticket */ }else if( db_exists("SELECT 1 FROM tag WHERE tagname='event-%q'", p->zAttachTarget) ){ attachToType = 'e'; /* Attaching to known tech note */ } } db_multi_exec( "INSERT INTO attachment(attachid, mtime, src, target," "filename, comment, user)" |
︙ | ︙ | |||
2163 2164 2165 2166 2167 2168 2169 | zComment = mprintf("Delete attachment \"%h\" from wiki page [%h]", p->zAttachName, p->zAttachTarget); } }else if( 'e' == attachToType ){ if( isAdd ){ zComment = mprintf( "Add attachment [/artifact/%!S|%h] to tech note [/technote/%!S|%S]", | | | | 2242 2243 2244 2245 2246 2247 2248 2249 2250 2251 2252 2253 2254 2255 2256 2257 2258 2259 2260 2261 2262 2263 | zComment = mprintf("Delete attachment \"%h\" from wiki page [%h]", p->zAttachName, p->zAttachTarget); } }else if( 'e' == attachToType ){ if( isAdd ){ zComment = mprintf( "Add attachment [/artifact/%!S|%h] to tech note [/technote/%!S|%S]", p->zAttachSrc, p->zAttachName, p->zAttachTarget, p->zAttachTarget); }else{ zComment = mprintf( "Delete attachment \"/artifact/%!S|%h\" from" " tech note [/technote/%!S|%S]", p->zAttachName, p->zAttachName, p->zAttachTarget,p->zAttachTarget); } }else{ if( isAdd ){ zComment = mprintf( "Add attachment [/artifact/%!S|%h] to ticket [%!S|%S]", p->zAttachSrc, p->zAttachName, p->zAttachTarget, p->zAttachTarget); }else{ zComment = mprintf("Delete attachment \"%h\" from ticket [%!S|%S]", |
︙ | ︙ | |||
2246 2247 2248 2249 2250 2251 2252 2253 2254 2255 2256 2257 2258 2259 2260 | continue; }else if( strcmp(zName, "+date")==0 ){ blob_appendf(&comment, " Timestamp %h.", zValue); continue; }else if( memcmp(zName, "-sym-",5)==0 ){ if( !branchMove ){ blob_appendf(&comment, " Cancel tag \"%h\"", &zName[5]); } }else if( memcmp(zName, "*sym-",5)==0 ){ if( !branchMove ){ blob_appendf(&comment, " Add propagating tag \"%h\"", &zName[5]); } }else if( memcmp(zName, "+sym-",5)==0 ){ blob_appendf(&comment, " Add tag \"%h\"", &zName[5]); }else if( strcmp(zName, "+closed")==0 ){ | > > > > | | | 2325 2326 2327 2328 2329 2330 2331 2332 2333 2334 2335 2336 2337 2338 2339 2340 2341 2342 2343 2344 2345 2346 2347 2348 2349 2350 2351 2352 2353 | continue; }else if( strcmp(zName, "+date")==0 ){ blob_appendf(&comment, " Timestamp %h.", zValue); continue; }else if( memcmp(zName, "-sym-",5)==0 ){ if( !branchMove ){ blob_appendf(&comment, " Cancel tag \"%h\"", &zName[5]); }else{ continue; } }else if( memcmp(zName, "*sym-",5)==0 ){ if( !branchMove ){ blob_appendf(&comment, " Add propagating tag \"%h\"", &zName[5]); }else{ continue; } }else if( memcmp(zName, "+sym-",5)==0 ){ blob_appendf(&comment, " Add tag \"%h\"", &zName[5]); }else if( strcmp(zName, "+closed")==0 ){ blob_append(&comment, " Mark \"Closed\"", -1); }else if( strcmp(zName, "-closed")==0 ){ blob_append(&comment, " Remove the \"Closed\" mark", -1); }else { if( zName[0]=='-' ){ blob_appendf(&comment, " Cancel \"%h\"", &zName[1]); }else if( zName[0]=='+' ){ blob_appendf(&comment, " Add \"%h\"", &zName[1]); }else{ blob_appendf(&comment, " Add propagating \"%h\"", &zName[1]); |
︙ | ︙ |
Changes to src/markdown.c.
︙ | ︙ | |||
33 34 35 36 37 38 39 | ********************/ #if INTERFACE /* mkd_autolink -- type of autolink */ enum mkd_autolink { MKDA_NOT_AUTOLINK, /* used internally when it is not an autolink*/ | | | | 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 | ********************/ #if INTERFACE /* mkd_autolink -- type of autolink */ enum mkd_autolink { MKDA_NOT_AUTOLINK, /* used internally when it is not an autolink*/ MKDA_NORMAL, /* normal http/http/ftp link */ MKDA_EXPLICIT_EMAIL, /* e-mail link with explicit mailto: */ MKDA_IMPLICIT_EMAIL /* e-mail link without mailto: */ }; /* mkd_renderer -- functions for rendering parsed data */ struct mkd_renderer { /* document level callbacks */ void (*prolog)(struct Blob *ob, void *opaque); |
︙ | ︙ | |||
293 294 295 296 297 298 299 | if( i>=size ) return 0; /* binary search of the tag */ key.text = data; key.size = i; return bsearch(&key, block_tags, | | | 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 | if( i>=size ) return 0; /* binary search of the tag */ key.text = data; key.size = i; return bsearch(&key, block_tags, count(block_tags), sizeof block_tags[0], cmp_html_tag); } /* new_work_buffer -- get a new working buffer from the stack or create one */ static struct Blob *new_work_buffer(struct render *rndr){ |
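Several hunks in this merge (here, and later in merge3.c, moderate.c, and printf.c) replace hard-coded element counts with a count() macro. The exact definition lives elsewhere in the Fossil tree; the sketch below shows the conventional sizeof-based form such a macro is assumed to take, which is what an expression like count(block_tags) evaluates to.

    #include <stdio.h>

    /* Assumed definition: number of elements in a fixed-size array. */
    #define count(X)  (sizeof(X)/sizeof(X[0]))

    int main(void){
      static const char *azTag[] = { "blockquote", "div", "p", "pre" };
      printf("%d tags\n", (int)count(azTag));   /* prints "4 tags" */
      return 0;
    }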
︙ | ︙ | |||
346 347 348 349 350 351 352 | i++; } if( i>=size || data[i]!='>' || nb!=1 ) return 0; return i+1; } | | | 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 | i++; } if( i>=size || data[i]!='>' || nb!=1 ) return 0; return i+1; } /* tag_length -- returns the length of the given tag, or 0 if it's not valid */ static size_t tag_length(char *data, size_t size, enum mkd_autolink *autolink){ size_t i, j; /* a valid tag can't be shorter than 3 chars */ if( size<3 ) return 0; /* begins with a '<' optionally followed by '/', followed by letter */ |
︙ | ︙ | |||
401 402 403 404 405 406 407 | /* one of the forbidden chars has been found */ *autolink = MKDA_NOT_AUTOLINK; }else if( (j = is_mail_autolink(data+i, size-i))!=0 ){ *autolink = (i==8) ? MKDA_EXPLICIT_EMAIL : MKDA_IMPLICIT_EMAIL; return i+j; } | | | 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 | /* one of the forbidden chars has been found */ *autolink = MKDA_NOT_AUTOLINK; }else if( (j = is_mail_autolink(data+i, size-i))!=0 ){ *autolink = (i==8) ? MKDA_EXPLICIT_EMAIL : MKDA_IMPLICIT_EMAIL; return i+j; } /* looking for something looking like a tag end */ while( i<size && data[i]!='>' ){ i++; } if( i>=size ) return 0; return i+1; } /* parse_inline -- parses inline markdown elements */ |
︙ | ︙ | |||
518 519 520 521 522 523 524 | i++; } } return 0; } | | | 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 | i++; } } return 0; } /* parse_emph1 -- parsing single emphasis */ /* closed by a symbol not preceded by whitespace and not followed by symbol */ static size_t parse_emph1( struct Blob *ob, struct render *rndr, char *data, size_t size, char c |
︙ | ︙ | |||
563 564 565 566 567 568 569 | return r ? i+1 : 0; } } return 0; } | | | 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 | return r ? i+1 : 0; } } return 0; } /* parse_emph2 -- parsing single emphasis */ static size_t parse_emph2( struct Blob *ob, struct render *rndr, char *data, size_t size, char c ){ |
︙ | ︙ | |||
602 603 604 605 606 607 608 | } i++; } return 0; } | | | 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 | } i++; } return 0; } /* parse_emph3 -- parsing single emphasis */ /* finds the first closing tag, and delegates to the other emph */ static size_t parse_emph3( struct Blob *ob, struct render *rndr, char *data, size_t size, char c |
︙ | ︙ | |||
775 776 777 778 779 780 781 | } } return 2; } /* char_entity -- '&' escaped when it doesn't belong to an entity */ | | | 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 | } } return 2; } /* char_entity -- '&' escaped when it doesn't belong to an entity */ /* valid entities are assumed to be anything matching &#?[A-Za-z0-9]+; */ static size_t char_entity( struct Blob *ob, struct render *rndr, char *data, size_t offset, size_t size ){ |
︙ | ︙ | |||
1020 1021 1022 1023 1024 1025 1026 | if( id_end>=size ) goto char_link_cleanup; if( i+1==id_end ){ /* implicit id - use the contents */ id_data = data+1; id_size = txt_e-1; }else{ | | | 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 | if( id_end>=size ) goto char_link_cleanup; if( i+1==id_end ){ /* implicit id - use the contents */ id_data = data+1; id_size = txt_e-1; }else{ /* explicit id - between brackets */ id_data = data+i+1; id_size = id_end-(i+1); } if( get_link_ref(rndr, link, title, id_data, id_size)<0 ){ goto char_link_cleanup; } |
︙ | ︙ | |||
1136 1137 1138 1139 1140 1141 1142 | return (i>=size || data[i]=='\n') ? 2 : 0; } return 0; } | | | 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 | return (i>=size || data[i]=='\n') ? 2 : 0; } return 0; } /* is_table_sep -- returns whether there is a table separator at pos */ static int is_table_sep(char *data, size_t pos){ return data[pos]=='|' && (pos==0 || data[pos-1]!='\\'); } /* is_tableline -- returns the number of column tables in the given line */ static int is_tableline(char *data, size_t size){ |
︙ | ︙ | |||
1185 1186 1187 1188 1189 1190 1191 | } }else{ return 0; } } | | | 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 | } }else{ return 0; } } /* prefix_code -- returns prefix length for block code */ static size_t prefix_code(char *data, size_t size){ if( size>0 && data[0]=='\t' ) return 1; if( size>3 && data[0]==' ' && data[1]==' ' && data[2]==' ' && data[3]==' ' ){ return 4; } return 0; } |
︙ | ︙ | |||
1242 1243 1244 1245 1246 1247 1248 | static void parse_block( struct Blob *ob, struct render *rndr, char *data, size_t size); | | | 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 | static void parse_block( struct Blob *ob, struct render *rndr, char *data, size_t size); /* parse_blockquote -- handles parsing of a blockquote fragment */ static size_t parse_blockquote( struct Blob *ob, struct render *rndr, char *data, size_t size ){ size_t beg, end = 0, pre, work_size = 0; |
︙ | ︙ | |||
1292 1293 1294 1295 1296 1297 1298 | rndr->make.blockquote(ob, out ? out : &fallback, rndr->make.opaque); } release_work_buffer(rndr, out); return end; } | | | 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 | rndr->make.blockquote(ob, out ? out : &fallback, rndr->make.opaque); } release_work_buffer(rndr, out); return end; } /* parse_paragraph -- handles parsing of a regular paragraph */ static size_t parse_paragraph( struct Blob *ob, struct render *rndr, char *data, size_t size ){ size_t i = 0, end = 0; |
︙ | ︙ | |||
1375 1376 1377 1378 1379 1380 1381 | release_work_buffer(rndr, span); } } return end; } | | | 1375 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 | release_work_buffer(rndr, span); } } return end; } /* parse_blockcode -- handles parsing of a block-level code fragment */ static size_t parse_blockcode( struct Blob *ob, struct render *rndr, char *data, size_t size ){ size_t beg, end, pre; |
︙ | ︙ | |||
1811 1812 1813 1814 1815 1816 1817 | int flags /* table flags */ ){ size_t i = 0, col = 0; size_t beg, end, total = 0; struct Blob *cells = new_work_buffer(rndr); int align; | | | 1811 1812 1813 1814 1815 1816 1817 1818 1819 1820 1821 1822 1823 1824 1825 | int flags /* table flags */ ){ size_t i = 0, col = 0; size_t beg, end, total = 0; struct Blob *cells = new_work_buffer(rndr); int align; /* skip leading blanks and separator */ while( i<size && (data[i]==' ' || data[i]=='\t') ){ i++; } if( i<size && data[i]=='|' ) i++; /* go over all the cells */ while( i<size && total==0 ){ /* check optional left/center align marker */ align = 0; |
︙ | ︙ | |||
2028 2029 2030 2031 2032 2033 2034 | /* is_ref -- returns whether a line is a reference or not */ static int is_ref( char *data, /* input text */ size_t beg, /* offset of the beginning of the line */ size_t end, /* offset of the end of the text */ size_t *last, /* last character of the link */ | | | 2028 2029 2030 2031 2032 2033 2034 2035 2036 2037 2038 2039 2040 2041 2042 | /* is_ref -- returns whether a line is a reference or not */ static int is_ref( char *data, /* input text */ size_t beg, /* offset of the beginning of the line */ size_t end, /* offset of the end of the text */ size_t *last, /* last character of the link */ struct Blob *refs /* array of link references */ ){ size_t i = 0; size_t id_offset, id_end; size_t link_offset, link_end; size_t title_offset, title_end; size_t line_end; struct link_ref lr = { |
︙ | ︙ |
Changes to src/markdown.md.
|
| | < < < < < | < < | < < | < < < | < < < | < < | < < | < | < | < < < < | < < < < < < < < < < < < < < < < < < < < | < | < | < | < < | < | < < < | < | < < < < | < < < | < | < | < | < < | < < < < < < < < < < < < < < < < < < < < < < < | < | < < | < | < < | < | < | < < < | < < < < < < < < < < < < < < | < < < < | < < | < < < | < | < | < < < | < < < | < < | < < | < | < < | < < | < < < | < < | < < < | < | < < | < | < < < < < < < < < < < < < < < < < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 | # Markdown Overview # ## Paragraphs ## > Paragraphs are divided by blank lines. ## Headings ## > # Top-level Heading Alternative Top Level Heading ============================= > ## Second-level Heading Alternative 2nd Level Heading ----------------------------- ## Links ## > 1. **\[display text\]\(URL\)** > 2. **\[display text\]\(URL "Title"\)** > 3. **\<URL\>** > 4. **\[display text\]\[label\]** > With link format 4 ("reference links") the label must be resolved by > including a line of the form **\[label\]: URL** or > **\[label\]: URL "Title"** somewhere else > in the document. ## Fonts ## > * _\*italic\*_ > * *\_italic\_* > * __\*\*bold\*\*__ > * **\_\_bold\_\_** > * `` `code` `` > Note that the \`...\` construct disables HTML markup, so one can write, > for example, **\``<html>`\`** to yield **`<html>`**. ## Lists ## > * bullet item + bullet item - bullet item 1. numbered item ## Block Quotes ## > Begin each line of a paragraph with ">" to block quote that paragraph. > > > This paragraph is indented > > > > Double-indented paragraph ## Miscellaneous ## > * In-line images using **\!\[alt-text\]\(image-URL\)** > * Use HTML for complex formatting issues. > * Escape special characters (ex: "\[", "\(", "\*") > using backslash (ex: "\\\[", "\\\(", "\\\*"). > * See [daringfireball.net](http://daringfireball.net/projects/markdown/syntax) > for additional information. ## Special Features For Fossil ## > * In hyperlinks, if the URL begins with "/" then the root of the Fossil > repository is prepended. This allows for repository-relative hyperlinks. > * For documents that begin with top-level heading (ex: "# heading #"), the > heading is omitted from the body of the document and becomes the document > title displayed at the top of the Fossil page. |
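As a concrete illustration of link format 4 described above: writing [home page][hp] in the body and placing the single line [hp]: http://example.com/ "Example" anywhere else in the document yields a hyperlink whose display text is "home page", whose target is http://example.com/, and whose title is "Example". (The URL here is illustrative only.)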
Changes to src/markdown_html.c.
︙ | ︙ | |||
70 71 72 73 74 75 76 77 78 79 80 81 82 83 | BLOB_APPEND_LITERAL(ob, "<"); }else if( data[i]=='>' ){ BLOB_APPEND_LITERAL(ob, ">"); }else if( data[i]=='&' ){ BLOB_APPEND_LITERAL(ob, "&"); }else if( data[i]=='"' ){ BLOB_APPEND_LITERAL(ob, """); }else{ break; } i++; } } } | > > | 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 | BLOB_APPEND_LITERAL(ob, "<"); }else if( data[i]=='>' ){ BLOB_APPEND_LITERAL(ob, ">"); }else if( data[i]=='&' ){ BLOB_APPEND_LITERAL(ob, "&"); }else if( data[i]=='"' ){ BLOB_APPEND_LITERAL(ob, """); }else if( data[i]=='\'' ){ BLOB_APPEND_LITERAL(ob, "'"); }else{ break; } i++; } } } |
︙ | ︙ | |||
353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 | static int html_link( struct Blob *ob, struct Blob *link, struct Blob *title, struct Blob *content, void *opaque ){ BLOB_APPEND_LITERAL(ob, "<a href=\""); html_escape(ob, blob_buffer(link), blob_size(link)); if( title && blob_size(title)>0 ){ BLOB_APPEND_LITERAL(ob, "\" title=\""); html_escape(ob, blob_buffer(title), blob_size(title)); } BLOB_APPEND_LITERAL(ob, "\">"); BLOB_APPEND_BLOB(ob, content); | > > > > > > | 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 | static int html_link( struct Blob *ob, struct Blob *link, struct Blob *title, struct Blob *content, void *opaque ){ char *zLink = blob_buffer(link); BLOB_APPEND_LITERAL(ob, "<a href=\""); if( zLink && zLink[0]=='/' && g.zTop ){ /* For any hyperlink that begins with "/", make it refer to the root ** of the Fossil repository */ blob_append(ob, g.zTop, -1); } html_escape(ob, blob_buffer(link), blob_size(link)); if( title && blob_size(title)>0 ){ BLOB_APPEND_LITERAL(ob, "\" title=\""); html_escape(ob, blob_buffer(title), blob_size(title)); } BLOB_APPEND_LITERAL(ob, "\">"); BLOB_APPEND_BLOB(ob, content); |
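The change above makes html_link() prepend g.zTop (the top-level URL of the served repository) whenever a link target begins with "/", which is what gives Markdown documents repository-relative hyperlinks. A minimal standalone sketch of the same idea; the function name prefix_link() and the sample paths are invented for illustration:

    #include <stdio.h>

    /* Write an href value, prepending zTop when the link is
    ** repository-relative (i.e. begins with '/'). */
    void prefix_link(const char *zTop, const char *zLink){
      if( zLink[0]=='/' && zTop && zTop[0] ){
        fputs(zTop, stdout);
      }
      fputs(zLink, stdout);
      fputc('\n', stdout);
    }

    int main(void){
      prefix_link("/fossil", "/doc/trunk/README.md"); /* /fossil/doc/trunk/README.md */
      prefix_link("/fossil", "http://example.com/");  /* absolute URLs pass through */
      return 0;
    }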
︙ | ︙ |
Changes to src/md5.c.
︙ | ︙ | |||
420 421 422 423 424 425 426 | DigestToBase16(zResult, blob_buffer(pCksum)); return 0; } /* ** COMMAND: md5sum* | | | 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 | DigestToBase16(zResult, blob_buffer(pCksum)); return 0; } /* ** COMMAND: md5sum* ** ** Usage: %fossil md5sum FILES.... ** ** Compute an MD5 checksum of all files named on the command-line. ** If a file is named "-" then content is read from standard input. */ void md5sum_test(void){ int i; |
︙ | ︙ |
Changes to src/merge.c.
︙ | ︙ | |||
211 212 213 214 215 216 217 | ** -n|--dry-run If given, display instead of run actions ** ** -v|--verbose Show additional details of the merge */ void merge_cmd(void){ int vid; /* Current version "V" */ int mid; /* Version we are merging from "M" */ | | | 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 | ** -n|--dry-run If given, display instead of run actions ** ** -v|--verbose Show additional details of the merge */ void merge_cmd(void){ int vid; /* Current version "V" */ int mid; /* Version we are merging from "M" */ int pid = 0; /* The pivot version - most recent common ancestor P */ int nid = 0; /* The name pivot version "N" */ int verboseFlag; /* True if the -v|--verbose option is present */ int integrateFlag; /* True if the --integrate option is present */ int pickFlag; /* True if the --cherrypick option is present */ int backoutFlag; /* True if the --backout option is present */ int dryRunFlag; /* True if the --dry-run or -n option is present */ int forceFlag; /* True if the --force or -f option is present */ |
︙ | ︙ | |||
263 264 265 266 267 268 269 | if( zBinGlob==0 ) zBinGlob = db_get("binary-glob",0); vid = db_lget_int("checkout", 0); if( vid==0 ){ fossil_fatal("nothing is checked out"); } if( !dryRunFlag ){ if( autosync_loop(SYNC_PULL + SYNC_VERBOSE*verboseFlag, | | | | 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 | if( zBinGlob==0 ) zBinGlob = db_get("binary-glob",0); vid = db_lget_int("checkout", 0); if( vid==0 ){ fossil_fatal("nothing is checked out"); } if( !dryRunFlag ){ if( autosync_loop(SYNC_PULL + SYNC_VERBOSE*verboseFlag, db_get_int("autosync-tries", 1), 1) ){ fossil_fatal("merge abandoned due to sync failure"); } } /* Find mid, the artifactID of the version to be merged into the current ** check-out */ if( g.argc==3 ){ /* Mid is specified as an argument on the command-line */ |
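The retry count for the pre-merge pull now comes from a setting rather than a constant: db_get_int("autosync-tries", 1) defaults to a single attempt, so a user who wants the merge command to retry a failing autosync would presumably raise the autosync-tries setting (for example with "fossil settings autosync-tries 3"). The actual retry behavior is implemented by autosync_loop() and is not shown in this hunk.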
︙ | ︙ | |||
393 394 395 396 397 398 399 | if( load_vfile_from_rid(pid) && !forceMissingFlag ){ fossil_fatal("missing content, unable to merge"); } if( zPivot ){ vAncestor = db_exists( "WITH RECURSIVE ancestor(id) AS (" " VALUES(%d)" | | | 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 | if( load_vfile_from_rid(pid) && !forceMissingFlag ){ fossil_fatal("missing content, unable to merge"); } if( zPivot ){ vAncestor = db_exists( "WITH RECURSIVE ancestor(id) AS (" " VALUES(%d)" " UNION" " SELECT pid FROM plink, ancestor" " WHERE cid=ancestor.id AND pid!=%d AND cid!=%d)" "SELECT 1 FROM ancestor WHERE id=%d LIMIT 1", vid, nid, pid, pid ) ? 'p' : 'n'; } if( debugFlag ){ |
︙ | ︙ |
Changes to src/merge3.c.
︙ | ︙ | |||
314 315 316 317 318 319 320 | int i, j; int len = (int)strlen(mergeMarker[0]); const char *z = blob_buffer(p); int n = blob_size(p) - len + 1; assert( len==(int)strlen(mergeMarker[1]) ); assert( len==(int)strlen(mergeMarker[2]) ); assert( len==(int)strlen(mergeMarker[3]) ); | | | 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 | int i, j; int len = (int)strlen(mergeMarker[0]); const char *z = blob_buffer(p); int n = blob_size(p) - len + 1; assert( len==(int)strlen(mergeMarker[1]) ); assert( len==(int)strlen(mergeMarker[2]) ); assert( len==(int)strlen(mergeMarker[3]) ); assert( count(mergeMarker)==4 ); for(i=0; i<n; ){ for(j=0; j<4; j++){ if( memcmp(&z[i], mergeMarker[j], len)==0 ) return 1; } while( i<n && z[i]!='\n' ){ i++; } while( i<n && z[i]=='\n' ){ i++; } } |
︙ | ︙ | |||
374 375 376 377 378 379 380 | /* We should be done with options.. */ verify_all_options(); if( g.argc!=6 ){ usage("PIVOT V1 V2 MERGED"); } if( blob_read_from_file(&pivot, g.argv[2])<0 ){ | | | | | | 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 | /* We should be done with options.. */ verify_all_options(); if( g.argc!=6 ){ usage("PIVOT V1 V2 MERGED"); } if( blob_read_from_file(&pivot, g.argv[2])<0 ){ fossil_fatal("cannot read %s", g.argv[2]); } if( blob_read_from_file(&v1, g.argv[3])<0 ){ fossil_fatal("cannot read %s", g.argv[3]); } if( blob_read_from_file(&v2, g.argv[4])<0 ){ fossil_fatal("cannot read %s", g.argv[4]); } blob_merge(&pivot, &v1, &v2, &merged); if( blob_write_to_file(&merged, g.argv[5])<blob_size(&merged) ){ fossil_fatal("cannot write %s", g.argv[4]); } blob_reset(&pivot); blob_reset(&v1); blob_reset(&v2); blob_reset(&merged); } |
︙ | ︙ |
Changes to src/mkindex.c.
︙ | ︙ | |||
20 21 22 23 24 25 26 27 | ** routine collects information about these entry points and then ** generates (on standard output) C code used by Fossil to dispatch ** to those entry points. ** ** The source code is scanned for comment lines of the form: ** ** WEBPAGE: /abc/xyz ** | > | | > | > > > > > > > > | > > > < > | < < | | < < < | > | | < > > > > > > > > > > > | | | | | > | | | > > > > > > > > > > > > > > > > > > > > > > > | | | > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | | | | | | | | | | > > | < < | < < > > > > > > > > > > > > < < < < < < < < < < < < < < < < < < < < < < < < | < | < < < < < < < < < < < < < < < < < < < < < < < < < < < | | | | | | | | | | | | | | | | < | | < < < < > | | > | | < | > > > > > > > > > > | | | | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 | ** routine collects information about these entry points and then ** generates (on standard output) C code used by Fossil to dispatch ** to those entry points. ** ** The source code is scanned for comment lines of the form: ** ** WEBPAGE: /abc/xyz ** COMMAND: cmdname ** ** These comment should be followed by a function definition of the ** form: ** ** void function_name(void){ ** ** This routine creates C source code for a constant table that maps ** command and webpage name into pointers to the function. ** ** Command names can divided into three classes: 1st-tier, 2nd-tier, ** and test. 1st-tier commands are the most frequently used and the ** ones that show up with "fossil help". 2nd-tier are seldom-used and/or ** legacy command. Test commands are unsupported commands used for testing ** and analysis only. ** ** Commands are 1st-tier by default. If the command name begins with ** "test-" or if the command name has a "test" argument, then it becomes ** a test command. If the command name has a "2nd-tier" argument or ends ** with a "*" character, it is second tier. 
Examples: ** ** COMMAND: abcde* ** COMMAND: fghij 2nd-tier ** COMMAND: test-xyzzy ** COMMAND: xyzzy test ** ** New arguments may be added in future releases that set additional ** bits in the eCmdFlags field. ** ** Additional lines of comment after the COMMAND: or WEBPAGE: become ** the built-in help text for that command or webpage. ** ** Multiple COMMAND: entries can be attached to the same command, thus ** creating multiple aliases for that command. Similarly, multiple ** WEBPAGE: entries can be attached to the same webpage function, to give ** that page aliases. */ #include <stdio.h> #include <stdlib.h> #include <assert.h> #include <string.h> /*************************************************************************** ** These macros must match similar macros in dispatch.c. ** ** Allowed values for CmdOrPage.eCmdFlags. */ #define CMDFLAG_1ST_TIER 0x0001 /* Most important commands */ #define CMDFLAG_2ND_TIER 0x0002 /* Obscure and seldom used commands */ #define CMDFLAG_TEST 0x0004 /* Commands for testing only */ #define CMDFLAG_WEBPAGE 0x0008 /* Web pages */ #define CMDFLAG_COMMAND 0x0010 /* A command */ /**************************************************************************/ /* ** Each entry looks like this: */ typedef struct Entry { int eType; /* CMDFLAG_* values */ char *zIf; /* Enclose in #if */ char *zFunc; /* Name of implementation */ char *zPath; /* Webpage or command name */ char *zHelp; /* Help text */ int iHelp; /* Index of Help text */ } Entry; /* ** Maximum number of entries */ #define N_ENTRY 5000 /* ** Maximum size of a help message */ #define MX_HELP 250000 /* ** Table of entries */ Entry aEntry[N_ENTRY]; /* ** Current help message accumulator */ char zHelp[MX_HELP]; int nHelp; /* ** Most recently encountered #if */ char zIf[2000]; /* ** How many entries are used */ int nUsed; int nFixed; /* ** Current filename and line number */ char *zFile; int nLine; /* ** Number of errors */ int nErr = 0; /* ** Duplicate N characters of a string. */ char *string_dup(const char *zSrc, int n){ char *z; if( n<0 ) n = strlen(zSrc); z = malloc( n+1 ); if( z==0 ){ fprintf(stderr,"Out of memory!\n"); exit(1); } strncpy(z, zSrc, n); z[n] = 0; return z; } /* ** Safe isspace macro. Works with signed characters. */ int fossil_isspace(char c){ return c==' ' || (c<='\r' && c>='\t'); } /* ** Safe isident macro. Works with signed characters. */ int fossil_isident(char c){ if( c>='a' && c<='z' ) return 1; if( c>='A' && c<='Z' ) return 1; if( c>='0' && c<='9' ) return 1; if( c=='_' ) return 1; return 0; } /* ** Scan a line looking for comments containing zLabel. Make ** new entries if found. 
*/ void scan_for_label(const char *zLabel, char *zLine, int eType){ int i, j; int len = strlen(zLabel); if( nUsed>=N_ENTRY ) return; for(i=0; fossil_isspace(zLine[i]) || zLine[i]=='*'; i++){} if( zLine[i]!=zLabel[0] ) return; if( strncmp(&zLine[i],zLabel, len)==0 ){ i += len; }else{ return; } while( fossil_isspace(zLine[i]) ){ i++; } if( zLine[i]=='/' ) i++; for(j=0; zLine[i+j] && !fossil_isspace(zLine[i+j]); j++){} aEntry[nUsed].eType = eType; if( eType & CMDFLAG_WEBPAGE ){ aEntry[nUsed].zPath = string_dup(&zLine[i-1], j+1); aEntry[nUsed].zPath[0] = '/'; }else{ aEntry[nUsed].zPath = string_dup(&zLine[i], j); } aEntry[nUsed].zFunc = 0; if( (eType & CMDFLAG_COMMAND)!=0 ){ if( strncmp(&zLine[i], "test-", 5)==0 ){ /* Commands that start with "test-" are test-commands */ aEntry[nUsed].eType |= CMDFLAG_TEST; }else if( zLine[i+j-1]=='*' ){ /* If the command name ends in '*', remove the '*' from the name ** but move the command into the second tier */ aEntry[nUsed].zPath[j-1] = 0; aEntry[nUsed].eType |= CMDFLAG_2ND_TIER; }else{ /* Otherwise, this is a first-tier command */ aEntry[nUsed].eType |= CMDFLAG_1ST_TIER; } } /* Process additional flags that might follow the command name */ while( zLine[i+j]!=0 ){ i += j; while( fossil_isspace(zLine[i]) ){ i++; } if( zLine[i]==0 ) break; for(j=0; zLine[i+j] && !fossil_isspace(zLine[i+j]); j++){} if( j==8 && strncmp(&zLine[i], "1st-tier", j)==0 ){ aEntry[nUsed].eType &= ~(CMDFLAG_2ND_TIER|CMDFLAG_TEST); aEntry[nUsed].eType |= CMDFLAG_1ST_TIER; }else if( j==8 && strncmp(&zLine[i], "2nd-tier", j)==0 ){ aEntry[nUsed].eType &= ~(CMDFLAG_1ST_TIER|CMDFLAG_TEST); aEntry[nUsed].eType |= CMDFLAG_2ND_TIER; }else if( j==4 && strncmp(&zLine[i], "test", j)==0 ){ aEntry[nUsed].eType &= ~(CMDFLAG_1ST_TIER|CMDFLAG_2ND_TIER); aEntry[nUsed].eType |= CMDFLAG_TEST; }else{ fprintf(stderr, "%s:%d: unknown option: '%.*s'\n", zFile, nLine, j, &zLine[i]); nErr++; } } nUsed++; } /* ** Check to see if the current line is an #if and if it is, add it to ** the zIf[] string. If the current line is an #endif or #else or #elif ** then cancel the current zIf[] string. */ void scan_for_if(const char *zLine){ int i; int len; if( zLine[0]!='#' ) return; for(i=1; fossil_isspace(zLine[i]); i++){} if( zLine[i]==0 ) return; len = strlen(&zLine[i]); if( strncmp(&zLine[i],"if",2)==0 ){ zIf[0] = '#'; memcpy(&zIf[1], &zLine[i], len+1); }else if( zLine[i]=='e' ){ zIf[0] = 0; } } /* ** Scan a line for a function that implements a web page or command. */ void scan_for_func(char *zLine){ int i,j,k; char *z; if( nUsed<=nFixed ) return; if( strncmp(zLine, "**", 2)==0 && fossil_isspace(zLine[2]) && strlen(zLine)<sizeof(zHelp)-nHelp-1 && nUsed>nFixed && strncmp(zLine,"** COMMAND:",11)!=0 && strncmp(zLine,"** WEBPAGE:",11)!=0 ){ if( zLine[2]=='\n' ){ zHelp[nHelp++] = '\n'; }else{ if( strncmp(&zLine[3], "Usage: ", 6)==0 ) nHelp = 0; strcpy(&zHelp[nHelp], &zLine[3]); nHelp += strlen(&zHelp[nHelp]); } return; } for(i=0; fossil_isspace(zLine[i]); i++){} if( zLine[i]==0 ) return; if( strncmp(&zLine[i],"void",4)!=0 ){ if( zLine[i]!='*' ) goto page_skip; return; } i += 4; if( !fossil_isspace(zLine[i]) ) goto page_skip; while( fossil_isspace(zLine[i]) ){ i++; } for(j=0; fossil_isident(zLine[i+j]); j++){} if( j==0 ) goto page_skip; for(k=nHelp-1; k>=0 && fossil_isspace(zHelp[k]); k--){} nHelp = k+1; zHelp[nHelp] = 0; for(k=0; k<nHelp && fossil_isspace(zHelp[k]); k++){} if( k<nHelp ){ z = string_dup(&zHelp[k], nHelp-k); }else{ z = ""; } for(k=nFixed; k<nUsed; k++){ aEntry[k].zIf = zIf[0] ? 
string_dup(zIf, -1) : 0; aEntry[k].zFunc = string_dup(&zLine[i], j); aEntry[k].zHelp = z; z = 0; aEntry[k].iHelp = nFixed; } i+=j; while( fossil_isspace(zLine[i]) ){ i++; } if( zLine[i]!='(' ) goto page_skip; nFixed = nUsed; nHelp = 0; return; page_skip: for(i=nFixed; i<nUsed; i++){ fprintf(stderr,"%s:%d: skipping page \"%s\"\n", zFile, nLine, aEntry[i].zPath); } nUsed = nFixed; } /* ** Compare two entries */ int e_compare(const void *a, const void *b){ const Entry *pA = (const Entry*)a; const Entry *pB = (const Entry*)b; return strcmp(pA->zPath, pB->zPath); } /* ** Build the binary search table. */ void build_table(void){ int i; int nWeb = 0; qsort(aEntry, nFixed, sizeof(aEntry[0]), e_compare); printf( "/* Automatically generated code\n" "** DO NOT EDIT!\n" "**\n" "** This file was generated by the mkindex.exe program based on\n" "** comments in other Fossil source files.\n" "*/\n" ); /* Output declarations for all the action functions */ for(i=0; i<nFixed; i++){ if( aEntry[i].zIf ) printf("%s", aEntry[i].zIf); printf("extern void %s(void);\n", aEntry[i].zFunc); if( aEntry[i].zIf ) printf("#endif\n"); } /* Output strings for all the help text */ for(i=0; i<nFixed; i++){ char *z = aEntry[i].zHelp; if( z==0 ) continue; if( aEntry[i].zIf ) printf("%s", aEntry[i].zIf); printf("static const char zHelp%03d[] = \n", aEntry[i].iHelp); printf(" \""); while( *z ){ if( *z=='\n' ){ printf("\\n\"\n \""); }else if( *z=='"' ){ printf("\\\""); }else{ putchar(*z); } z++; } printf("\";\n"); if( aEntry[i].zIf ) printf("#endif\n"); } /* Generate the aCommand[] table */ printf("static const CmdOrPage aCommand[] = {\n"); for(i=0; i<nFixed; i++){ const char *z = aEntry[i].zPath; int n = strlen(z); if( aEntry[i].zIf ){ printf("%s", aEntry[i].zIf); }else if( (aEntry[i].eType & CMDFLAG_WEBPAGE)!=0 ){ nWeb++; } printf(" { \"%.*s\",%*s%s,%*szHelp%03d, 0x%02x },\n", n, z, 25-n, "", aEntry[i].zFunc, (int)(30-strlen(aEntry[i].zFunc)), "", aEntry[i].iHelp, aEntry[i].eType ); if( aEntry[i].zIf ) printf("#endif\n"); } printf("};\n"); printf("#define FOSSIL_FIRST_CMD %d\n", nWeb); } /* ** Process a single file of input */ void process_file(void){ FILE *in = fopen(zFile, "r"); char zLine[2000]; if( in==0 ){ fprintf(stderr,"%s: cannot open\n", zFile); return; } nLine = 0; while( fgets(zLine, sizeof(zLine), in) ){ nLine++; scan_for_if(zLine); scan_for_label("WEBPAGE:",zLine,CMDFLAG_WEBPAGE); scan_for_label("COMMAND:",zLine,CMDFLAG_COMMAND); scan_for_func(zLine); } fclose(in); nUsed = nFixed; } int main(int argc, char **argv){ int i; for(i=1; i<argc; i++){ zFile = argv[i]; process_file(); } build_table(); return nErr; } |
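The rewritten mkindex.c above scans every source file for COMMAND: and WEBPAGE: comments and compiles them into the aCommand[] dispatch table and its help strings. For orientation, this is the shape of source comment the scanner looks for; the command name and function below are invented for the example, but the layout follows the convention described in the header comment (a trailing "*" or a "2nd-tier"/"test" argument selects the tier, and the comment lines between the COMMAND: line and the function definition become the built-in help text):

    /*
    ** COMMAND: example-frobnicate*
    **
    ** Usage: %fossil example-frobnicate ?OPTIONS?
    **
    ** One or more lines of help text.  Because the name ends in "*",
    ** mkindex marks this entry CMDFLAG_2ND_TIER and strips the "*".
    */
    void example_frobnicate_cmd(void){
      /* implementation of the command goes here */
    }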
Changes to src/mkversion.c.
︙ | ︙ | |||
62 63 64 65 66 67 68 69 70 71 72 73 74 75 | } for(z=vx; z[0]=='0'; z++){} printf("#define RELEASE_VERSION_NUMBER %s\n", z); memset(vx,0,sizeof(vx)); strcpy(vx,b); d = 0; for(z=vx; z[0]; z++){ if( z[0]!='.' ) continue; if ( d<3 ){ z[0] = ','; d++; }else{ z[0] = '\0'; break; | > > > > | 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 | } for(z=vx; z[0]=='0'; z++){} printf("#define RELEASE_VERSION_NUMBER %s\n", z); memset(vx,0,sizeof(vx)); strcpy(vx,b); d = 0; for(z=vx; z[0]; z++){ if( z[0]=='-' ){ z[0] = 0; break; } if( z[0]!='.' ) continue; if ( d<3 ){ z[0] = ','; d++; }else{ z[0] = '\0'; break; |
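The added lines above stop the dot-to-comma conversion at the first "-", so that a version string carrying a suffix (say "1.37-beta") still produces a purely numeric, comma-separated value in the generated output. A small self-contained sketch of that transformation, written independently of mkversion.c:

    #include <stdio.h>
    #include <string.h>

    /* Convert up to the first three dots of zVersion into commas and stop
    ** at any '-' suffix: "1.37-beta" -> "1,37".  zOut must be big enough. */
    void version_to_commas(const char *zVersion, char *zOut){
      int d = 0;
      char *z;
      strcpy(zOut, zVersion);
      for(z=zOut; z[0]; z++){
        if( z[0]=='-' ){ z[0] = 0; break; }   /* drop "-beta" style suffixes */
        if( z[0]!='.' ) continue;
        if( d<3 ){ z[0] = ','; d++; }else{ z[0] = 0; break; }
      }
    }

    int main(void){
      char zBuf[64];
      version_to_commas("1.37-beta", zBuf);
      printf("%s\n", zBuf);    /* prints "1,37" */
      return 0;
    }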
︙ | ︙ |
Changes to src/moderate.c.
︙ | ︙ | |||
24 25 26 27 28 29 30 | /* ** Create a table to represent pending moderation requests, if the ** table does not already exist. */ void moderation_table_create(void){ db_multi_exec( | | | | 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 | /* ** Create a table to represent pending moderation requests, if the ** table does not already exist. */ void moderation_table_create(void){ db_multi_exec( "CREATE TABLE IF NOT EXISTS repository.modreq(\n" " objid INTEGER PRIMARY KEY,\n" /* Record pending approval */ " attachRid INT,\n" /* Object attached */ " tktid TEXT\n" /* Associated ticket id */ ");\n" ); } /* ** Return TRUE if the modreq table exists */ int moderation_table_exists(void){ |
︙ | ︙ | |||
65 66 67 68 69 70 71 | "modreq", "attachRid", "mlink", "mid", "mlink", "fid", "tagxref", "srcid", "tagxref", "rid", }; int i; | | | 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 | "modreq", "attachRid", "mlink", "mid", "mlink", "fid", "tagxref", "srcid", "tagxref", "rid", }; int i; for(i=0; i<count(aTabField); i+=2){ if( db_exists("SELECT 1 FROM \"%w\" WHERE \"%w\"=%d", aTabField[i], aTabField[i+1], rid) ) return 1; } return 0; } /* |
︙ | ︙ |
Changes to src/name.c.
︙ | ︙ | |||
331 332 333 334 335 336 337 | /* ** This routine is similar to name_to_uuid() except in the form it ** takes its parameters and returns its value, and in that it does not ** treat errors as fatal. zName must be a UUID, as described for ** name_to_uuid(). zType is also as described for that function. If ** zName does not resolve, 0 is returned. If it is ambiguous, a ** negative value is returned. On success the rid is returned and | | | 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 | /* ** This routine is similar to name_to_uuid() except in the form it ** takes its parameters and returns its value, and in that it does not ** treat errors as fatal. zName must be a UUID, as described for ** name_to_uuid(). zType is also as described for that function. If ** zName does not resolve, 0 is returned. If it is ambiguous, a ** negative value is returned. On success the rid is returned and ** pUuid (if it is not NULL) is set to a newly-allocated string, ** the full UUID, which must eventually be free()d by the caller. */ int name_to_uuid2(const char *zName, const char *zType, char **pUuid){ int rid = symbolic_name_to_rid(zName, zType); if((rid>0) && pUuid){ *pUuid = db_text(NULL, "SELECT uuid FROM blob WHERE rid=%d", rid); } |
︙ | ︙ | |||
663 664 665 666 667 668 669 | comment_print(db_column_text(&q,1), 0, 12, -1, g.comFmtFlags); } db_finalize(&q); } /* ** COMMAND: whatis* | | | 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 | comment_print(db_column_text(&q,1), 0, 12, -1, g.comFmtFlags); } db_finalize(&q); } /* ** COMMAND: whatis* ** ** Usage: %fossil whatis NAME ** ** Resolve the symbol NAME into its canonical 40-character SHA1-hash ** artifact name and provide a description of what role that artifact ** plays. ** ** Options: |
︙ | ︙ | |||
719 720 721 722 723 724 725 | whatis_rid(rid, verboseFlag); } } } /* ** COMMAND: test-whatis-all | | | | 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 | whatis_rid(rid, verboseFlag); } } } /* ** COMMAND: test-whatis-all ** ** Usage: %fossil test-whatis-all ** ** Show "whatis" information about every artifact in the repository */ void test_whatis_all_cmd(void){ Stmt q; int cnt = 0; db_find_and_open_repository(0,0); db_prepare(&q, "SELECT rid FROM blob ORDER BY rid"); while( db_step(&q)==SQLITE_ROW ){ if( cnt++ ) fossil_print("%.79c\n", '-'); whatis_rid(db_column_int(&q,0), 1); } db_finalize(&q); } /* ** COMMAND: test-ambiguous ** ** Usage: %fossil test-ambiguous [--minsize N] ** ** Show a list of ambiguous SHA1-hash abbreviations of N characters or ** more where N defaults to 4. Change N to a different value using ** the "--minsize N" command-line option. */ void test_ambiguous_cmd(void){ |
︙ | ︙ | |||
932 933 934 935 936 937 938 | /* Mark private elements */ db_multi_exec( "UPDATE description SET isPrivate=1 WHERE rid IN private" ); } /* | | > > > > > | | | 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 | /* Mark private elements */ db_multi_exec( "UPDATE description SET isPrivate=1 WHERE rid IN private" ); } /* ** Print the content of the description table on stdout. ** ** The description table is computed using the WHERE clause zWhere if ** the zWhere parameter is not NULL. If zWhere is NULL, then this ** routine assumes that the description table already exists and is ** populated and merely prints the contents. */ int describe_artifacts_to_stdout(const char *zWhere, const char *zLabel){ Stmt q; int cnt = 0; if( zWhere!=0 ) describe_artifacts(zWhere); db_prepare(&q, "SELECT uuid, summary, isPrivate\n" " FROM description\n" " ORDER BY ctime, type;" ); while( db_step(&q)==SQLITE_ROW ){ if( zLabel ){ fossil_print("%s\n", zLabel); zLabel = 0; } fossil_print(" %.16s %s", db_column_text(&q,0), db_column_text(&q,1)); if( db_column_int(&q,2) ) fossil_print(" (unpublished)"); fossil_print("\n"); cnt++; } db_finalize(&q); if( zWhere!=0 ) db_multi_exec("DELETE FROM description;"); return cnt; } /* ** COMMAND: test-describe-artifacts ** ** Usage: %fossil test-describe-artifacts [--from S] [--count N] |
︙ | ︙ | |||
999 1000 1001 1002 1003 1004 1005 | int mx = db_int(0, "SELECT max(rid) FROM blob"); int unpubOnly = PB("unpub"); char *zRange; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("List Of Artifacts"); | | | | 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 | int mx = db_int(0, "SELECT max(rid) FROM blob"); int unpubOnly = PB("unpub"); char *zRange; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("List Of Artifacts"); style_submenu_element("250 Largest", "bigbloblist"); if( !unpubOnly && mx>n && P("s")==0 ){ int i; @ <p>Select a range of artifacts to view:</p> @ <ul> for(i=1; i<=mx; i+=n){ @ <li> %z(href("%R/bloblist?s=%d&n=%d",i,n)) @ %d(i)..%d(i+n-1<mx?i+n-1:mx)</a> } @ </ul> style_footer(); return; } if( !unpubOnly && mx>n ){ style_submenu_element("Index", "bloblist"); } if( unpubOnly ){ zRange = mprintf("IN private"); }else{ zRange = mprintf("BETWEEN %d AND %d", s, s+n-1); } describe_artifacts(zRange); |
︙ | ︙ | |||
1197 1198 1199 1200 1201 1202 1203 | } for(j=0; j<aCollide[i].cnt && j<MAX_COLLIDE; j++){ char *zId = aCollide[i].azHit[j]; if( zId==0 ) continue; @ %z(href("%R/whatis/%s",zId))%h(zId)</a> } } | | | | | 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 | } for(j=0; j<aCollide[i].cnt && j<MAX_COLLIDE; j++){ char *zId = aCollide[i].azHit[j]; if( zId==0 ) continue; @ %z(href("%R/whatis/%s",zId))%h(zId)</a> } } for(i=4; i<count(aCollide); i++){ for(j=0; j<aCollide[i].cnt && j<MAX_COLLIDE; j++){ fossil_free(aCollide[i].azHit[j]); } } } /* ** WEBPAGE: hash-collisions ** ** Show the number of hash collisions for hash prefixes of various lengths. */ void hash_collisions_webpage(void){ login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("SHA1 Prefix Collisions"); style_submenu_element("Activity Reports", "reports"); style_submenu_element("Stats", "stat"); @ <h1>Hash Prefix Collisions on Check-ins</h1> collision_report("SELECT (SELECT uuid FROM blob WHERE rid=objid)" " FROM event WHERE event.type='ci'" " ORDER BY 1"); @ <h1>Hash Prefix Collisions on All Artifacts</h1> collision_report("SELECT uuid FROM blob ORDER BY 1"); style_footer(); } |
Changes to src/piechart.c.
︙ | ︙ | |||
240 241 242 243 244 245 246 | }else{ if( y4<rLwrLeft ){ y4 = rLwrLeft; } rLwrLeft = y4 + TEXT_HEIGHT; } } | | | | < > | | > > > > > > > > | | < < > > > > | < > < | | | | | | | < < | | | | | | | < < | | 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 | }else{ if( y4<rLwrLeft ){ y4 = rLwrLeft; } rLwrLeft = y4 + TEXT_HEIGHT; } } if( x4<cx ){ x5 = x4 - 1.0; zAnc = "end"; }else{ x5 = x4 + 1.0; zAnc = "start"; } y5 = y4 - 3.0 + 6.0*(1.0 - p->rCos); @ <line stroke-width='1' stroke='%s(zFg)' class='piechartLine' @ x1='%g(x3)' y1='%g(y3)' x2='%g(x4)' y2='%g(y4)'/> @ <text text-anchor="%s(zAnc)" fill='%s(zFg)' class="piechartLabel" @ x='%g(x5)' y='%g(y5)'>%h(p->z)</text> fossil_free(p->z); } db_finalize(&q); fossil_free(aWedge); } /* ** WEBPAGE: test-piechart ** ** Generate a pie-chart based on data input from a form. */ void piechart_test_page(void){ const char *zData; Stmt ins; int n = 0; int width; int height; int i, j; login_check_credentials(); style_header("Pie Chart Test"); db_multi_exec("CREATE TEMP TABLE piechart(amt REAL, label TEXT);"); db_prepare(&ins, "INSERT INTO piechart(amt,label) VALUES(:amt,:label)"); zData = PD("data",""); width = atoi(PD("width","800")); height = atoi(PD("height","400")); i = 0; while( zData[i] ){ double rAmt; char *zLabel; while( fossil_isspace(zData[i]) ){ i++; } j = i; while( fossil_isdigit(zData[j]) ){ j++; } if( zData[j]=='.' ){ j++; while( fossil_isdigit(zData[j]) ){ j++; } } if( i==j ) break; rAmt = atof(&zData[i]); i = j; while( zData[i]==',' || fossil_isspace(zData[i]) ){ i++; } n++; zLabel = mprintf("label%02d-%g", n, rAmt); db_bind_double(&ins, ":amt", rAmt); db_bind_text(&ins, ":label", zLabel); db_step(&ins); db_reset(&ins); fossil_free(zLabel); } db_finalize(&ins); if( n>1 ){ @ <svg width=%d(width) height=%d(height) style="border:1px solid #d3d3d3;"> piechart_render(width,height, PIE_OTHER|PIE_PERCENT); @ </svg> @ <hr /> } @ <form method="POST" action='%R/test-piechart'> @ <p>Comma-separated list of slice widths:<br /> @ <input type='text' name='data' size='80' value='%h(zData)'/><br /> @ Width: <input type='text' size='8' name='width' value='%d(width)'/> @ Height: <input type='text' size='8' name='height' value='%d(height)'/><br /> @ <input type='submit' value='Draw The Pie Chart'/> @ </form> @ <p>Interesting test cases: @ <ul> @ <li> <a href='test-piechart?data=44,2,2,2,2,2,3,2,2,2,2,2,44'>Case 1</a> @ <li> <a href='test-piechart?data=2,2,2,2,2,44,44,2,2,2,2,2'>Case 2</a> @ <li> <a href='test-piechart?data=20,2,2,2,2,2,2,2,2,2,2,80'>Case 3</a> @ <li> <a href='test-piechart?data=80,2,2,2,2,2,2,2,2,2,2,20'>Case 4</a> @ <li> <a href='test-piechart?data=2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2'>Case 5</a> @ </ul> style_footer(); } |
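The test page above hand-rolls a scanner for its comma/whitespace separated list of slice widths. The fragment below is a simplified standalone equivalent using strtod(); unlike the page's own scanner it also accepts signs and exponents, so it is only an illustration of the parsing idea, not a drop-in replacement:

    #include <stdio.h>
    #include <stdlib.h>

    /* Parse a comma/space separated list of numbers, e.g. "44,2,2,44". */
    int parse_slices(const char *z, double *aOut, int mx){
      int n = 0;
      char *zEnd;
      while( *z && n<mx ){
        double r = strtod(z, &zEnd);
        if( zEnd==z ) break;            /* no number found: stop */
        aOut[n++] = r;
        z = zEnd;
        while( *z==',' || *z==' ' || *z=='\t' ) z++;
      }
      return n;
    }

    int main(void){
      double a[20];
      int i, n = parse_slices("44, 2, 2, 2, 44", a, 20);
      for(i=0; i<n; i++) printf("%g\n", a[i]);
      return 0;
    }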
Changes to src/printf.c.
︙ | ︙ | |||
156 157 158 159 160 161 162 | { 'G', 0, 1, etGENERIC, 14, 0 }, { 'i', 10, 1, etRADIX, 0, 0 }, { 'n', 0, 0, etSIZE, 0, 0 }, { '%', 0, 0, etPERCENT, 0, 0 }, { 'p', 16, 0, etPOINTER, 0, 1 }, { '/', 0, 0, etPATH, 0, 0 }, }; | | | 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 | { 'G', 0, 1, etGENERIC, 14, 0 }, { 'i', 10, 1, etRADIX, 0, 0 }, { 'n', 0, 0, etSIZE, 0, 0 }, { '%', 0, 0, etPERCENT, 0, 0 }, { 'p', 16, 0, etPOINTER, 0, 1 }, { '/', 0, 0, etPATH, 0, 0 }, }; #define etNINFO count(fmtinfo) /* ** "*val" is a double such that 0.1 <= *val < 10.0 ** Return the ascii code for the leading digit of *val, then ** multiply "*val" by 10.0 to renormalize. ** ** Example: |
︙ | ︙ | |||
873 874 875 876 877 878 879 | static int stdoutAtBOL = 1; /* ** Write to standard output or standard error. ** ** On windows, transform the output into the current terminal encoding ** if the output is going to the screen. If output is redirected into | | > > > > > > > < | > | > > | 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 | static int stdoutAtBOL = 1; /* ** Write to standard output or standard error. ** ** On windows, transform the output into the current terminal encoding ** if the output is going to the screen. If output is redirected into ** a file, no translation occurs. Switch output mode to binary to ** properly process line-endings, make sure to switch the mode back to ** text when done. ** No translation ever occurs on unix. */ void fossil_puts(const char *z, int toStdErr){ FILE* out = (toStdErr ? stderr : stdout); int n = (int)strlen(z); if( n==0 ) return; assert( toStdErr==0 || toStdErr==1 ); if( toStdErr==0 ) stdoutAtBOL = (z[n-1]=='\n'); #if defined(_WIN32) if( fossil_utf8_to_console(z, n, toStdErr) >= 0 ){ return; } fflush(out); _setmode(_fileno(out), _O_BINARY); #endif fwrite(z, 1, n, out); #if defined(_WIN32) fflush(out); _setmode(_fileno(out), _O_TEXT); #endif } /* ** Force the standard output cursor to move to the beginning ** of a line, if it is not there already. */ int fossil_force_newline(void){ |
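On Windows, the revised fossil_puts() above briefly switches the output stream to binary mode so the bytes it writes are not run through a second round of line-ending translation, then restores text mode. A minimal standalone illustration of that pattern; the guard and CRT calls mirror the ones used in the hunk:

    #include <stdio.h>
    #include <string.h>
    #if defined(_WIN32)
    #  include <fcntl.h>
    #  include <io.h>
    #endif

    /* Write a buffer verbatim, avoiding CR/LF translation on Windows. */
    void write_verbatim(FILE *out, const char *z){
    #if defined(_WIN32)
      fflush(out);
      _setmode(_fileno(out), _O_BINARY);
    #endif
      fwrite(z, 1, strlen(z), out);
    #if defined(_WIN32)
      fflush(out);
      _setmode(_fileno(out), _O_TEXT);
    #endif
    }

    int main(void){
      write_verbatim(stdout, "one line\nand another\n");
      return 0;
    }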
︙ | ︙ | |||
968 969 970 971 972 973 974 | fprintf(out, "------------- %04d-%02d-%02d %02d:%02d:%02d UTC ------------\n", pNow->tm_year+1900, pNow->tm_mon+1, pNow->tm_mday+1, pNow->tm_hour, pNow->tm_min, pNow->tm_sec); va_start(ap, zFormat); vfprintf(out, zFormat, ap); fprintf(out, "\n"); va_end(ap); | | | 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 | fprintf(out, "------------- %04d-%02d-%02d %02d:%02d:%02d UTC ------------\n", pNow->tm_year+1900, pNow->tm_mon+1, pNow->tm_mday+1, pNow->tm_hour, pNow->tm_min, pNow->tm_sec); va_start(ap, zFormat); vfprintf(out, zFormat, ap); fprintf(out, "\n"); va_end(ap); for(i=0; i<count(azEnv); i++){ char *p; if( (p = fossil_getenv(azEnv[i]))!=0 ){ fprintf(out, "%s=%s\n", azEnv[i], p); fossil_path_free(p); }else if( (z = P(azEnv[i]))!=0 ){ fprintf(out, "%s=%s\n", azEnv[i], z); } |
︙ | ︙ |
Changes to src/purge.c.
︙ | ︙ | |||
20 21 22 23 24 25 26 | ** manages the graveyard of purged content. */ #include "config.h" #include "purge.h" #include <assert.h> /* | | | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 | ** manages the graveyard of purged content. */ #include "config.h" #include "purge.h" #include <assert.h> /* ** SQL code used to initialize the schema of the graveyard. ** ** The purgeevent table contains one entry for each purge event. For each ** purge event, multiple artifacts might have been removed. Each removed ** artifact is stored as an entry in the purgeitem table. ** ** The purgeevent and purgeitem tables are not synced, even by the ** "fossil config" command. They exist only as a backup in case of a |
︙ | ︙ | |||
50 51 52 53 54 55 56 57 58 59 60 61 62 63 | @ isPrivate BOOLEAN, -- True if artifact was originally private @ sz INT NOT NULL, -- Uncompressed size of the purged artifact @ desc TEXT, -- Brief description of this artifact @ data BLOB -- Compressed artifact content @ ); ; /* ** This routine purges multiple artifacts from the repository, transfering ** those artifacts into the PURGEITEM table. ** ** Prior to invoking this routine, the caller must create a (TEMP) table ** named zTab that contains the RID of every artifact to be purged. ** | > > > > > > > > > | 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 | @ isPrivate BOOLEAN, -- True if artifact was originally private @ sz INT NOT NULL, -- Uncompressed size of the purged artifact @ desc TEXT, -- Brief description of this artifact @ data BLOB -- Compressed artifact content @ ); ; /* ** Flags for the purge_artifact_list() function. */ #if INTERFACE #define PURGE_MOVETO_GRAVEYARD 0x0001 /* Move artifacts in graveyard */ #define PURGE_EXPLAIN_ONLY 0x0002 /* Show what would have happened */ #define PURGE_PRINT_SUMMARY 0x0004 /* Print a summary report at end */ #endif /* ** This routine purges multiple artifacts from the repository, transfering ** those artifacts into the PURGEITEM table. ** ** Prior to invoking this routine, the caller must create a (TEMP) table ** named zTab that contains the RID of every artifact to be purged. ** |
︙ | ︙ | |||
79 80 81 82 83 84 85 | ** (h) BACKLINK ** (i) ATTACHMENT ** (j) TICKETCHNG ** (7) If any ticket artifacts were removed (6j) then rebuild the ** corresponding ticket entries. Possibly remove entries from ** the ticket table. ** | | | > > > > > > > > > > | 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 | ** (h) BACKLINK ** (i) ATTACHMENT ** (j) TICKETCHNG ** (7) If any ticket artifacts were removed (6j) then rebuild the ** corresponding ticket entries. Possibly remove entries from ** the ticket table. ** ** Steps 1-4 (saving the purged artifacts into the graveyard) are only ** undertaken if the moveToGraveyard flag is true. */ int purge_artifact_list( const char *zTab, /* TEMP table containing list of RIDS to be purged */ const char *zNote, /* Text of the purgeevent.pnotes field */ unsigned purgeFlags /* zero or more PURGE_* flags */ ){ int peid = 0; /* New purgeevent ID */ Stmt q; /* General-use prepared statement */ char *z; assert( g.repositoryOpen ); /* Main database must already be open */ db_begin_transaction(); z = sqlite3_mprintf("IN \"%w\"", zTab); describe_artifacts(z); sqlite3_free(z); describe_artifacts_to_stdout(0, 0); /* The explain-only flags causes this routine to list the artifacts ** that would have been purged but to not actually make any changes ** to the repository. */ if( purgeFlags & PURGE_EXPLAIN_ONLY ){ db_end_transaction(0); return 0; } /* Make sure we are not removing a manifest that is the baseline of some ** manifest that is being left behind. This step is not strictly necessary. ** is is just a safety check. */ if( purge_baseline_out_from_under_delta(zTab) ){ fossil_fatal("attempt to purge a baseline manifest without also purging " "all of its deltas"); |
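The three PURGE_* flags introduced above combine with bitwise OR: judging from the header comments, a caller that wants the removed artifacts saved and a closing report printed would pass PURGE_MOVETO_GRAVEYARD|PURGE_PRINT_SUMMARY, while PURGE_EXPLAIN_ONLY by itself makes purge_artifact_list() list what would be removed and then return without modifying the repository.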
︙ | ︙ | |||
119 120 121 122 123 124 125 | content_undelta(rid); verify_before_commit(rid); } db_finalize(&q); /* Construct the graveyard and copy the artifacts to be purged into the ** graveyard */ | | | | 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 | content_undelta(rid); verify_before_commit(rid); } db_finalize(&q); /* Construct the graveyard and copy the artifacts to be purged into the ** graveyard */ if( purgeFlags & PURGE_MOVETO_GRAVEYARD ){ db_multi_exec(zPurgeInit /*works-like:"%w%w"*/, "repository", "repository"); db_multi_exec( "INSERT INTO purgeevent(ctime,pnotes) VALUES(now(),%Q)", zNote ); peid = db_last_insert_rowid(); db_prepare(&q, "SELECT rid FROM delta WHERE rid IN \"%w\"" " AND srcid NOT IN \"%w\"", zTab, zTab); while( db_step(&q)==SQLITE_ROW ){ |
︙ | ︙ | |||
187 188 189 190 191 192 193 194 195 196 197 198 199 200 | ticket_rebuild_entry(db_column_text(&q, 0)); } db_finalize(&q); /* db_multi_exec("DROP TABLE \"%w_tickets\"", zTab); */ /* Mission accomplished */ db_end_transaction(0); return peid; } /* ** The TEMP table named zTab contains RIDs for a set of check-ins. ** ** Check to see if any check-in in zTab is a baseline manifest for some | > > > > > > > | 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 | ticket_rebuild_entry(db_column_text(&q, 0)); } db_finalize(&q); /* db_multi_exec("DROP TABLE \"%w_tickets\"", zTab); */ /* Mission accomplished */ db_end_transaction(0); if( purgeFlags & PURGE_PRINT_SUMMARY ){ fossil_print("%d artifacts purged\n", db_int(0, "SELECT count(*) FROM \"%w\";", zTab)); fossil_print("undoable using \"%s purge undo %d\".\n", g.nameOfExe, peid); } return peid; } /* ** The TEMP table named zTab contains RIDs for a set of check-ins. ** ** Check to see if any check-in in zTab is a baseline manifest for some |
︙ | ︙ | |||
218 219 220 221 222 223 224 | } } /* ** The TEMP table named zTab contains the RIDs for a set of check-in ** artifacts. Expand this set (by adding new entries to zTab) to include | | | 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 | } } /* ** The TEMP table named zTab contains the RIDs for a set of check-in ** artifacts. Expand this set (by adding new entries to zTab) to include ** all other artifacts that are used by the check-ins in ** the original list. ** ** If the bExclusive flag is true, then the set is only expanded by ** artifacts that are used exclusively by the check-ins in the set. ** When bExclusive is false, then all artifacts used by the check-ins ** are added even if those artifacts are also used by other check-ins ** not in the set. |
︙ | ︙ | |||
420 421 422 423 424 425 426 | blob_reset(&c2); } db_finalize(&q); if( iSrc>0 ) bag_remove(&busy, iSrc); } /* | | | > > > > > > > > > > > > > | | | < < < | > > | > > | | | > > > > > > > > > > > > > > | | > > > > > > > > > > > > > > > > > > > > > | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | | | | | | | | | | | | | | | | | | | | < < < < < < < < < < < < < < < < < < > | < < < < < < | < < | < < < > > | 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 | blob_reset(&c2); } db_finalize(&q); if( iSrc>0 ) bag_remove(&busy, iSrc); } /* ** COMMAND: purge* ** ** The purge command removes content from a repository and stores that content ** in a "graveyard". The graveyard exists so that content can be recovered ** using the "fossil purge undo" command. The "fossil purge obliterate" ** command empties the graveyard, making the content unrecoverable. ** ** ==== WARNING: This command can potentially destroy historical data and ==== ** ==== leave your repository in a goofy state. Know what you are doing! ==== ** ==== Make a backup of your repository before using this command! ==== ** ** ==== FURTHER WARNING: This command is a work-in-progress and may yet ==== ** ==== contain bugs. ==== ** ** fossil purge artifacts UUID... ?OPTIONS? ** ** Move arbitrary artifacts identified by the UUID list into the ** graveyard. ** ** fossil purge cat UUID... ** ** Write the content of one or more artifacts in the graveyard onto ** standard output. ** ** fossil purge checkins TAGS... ?OPTIONS? ** ** Move the check-ins or branches identified by TAGS and all of ** their descendants out of the repository and into the graveyard. ** If TAGS includes a branch name then it means all the check-ins ** on the most recent occurrence of that branch. ** ** fossil purge files NAME ... ?OPTIONS? ** ** Move all instances of files called NAME into the graveyard. ** NAME should be the name of the file relative to the root of the ** repository. If NAME is a directory, then all files within that ** directory are moved. ** ** fossil purge list|ls ?-l? ** ** Show the graveyard of prior purges. The -l option gives more ** detail in the output. ** ** fossil purge obliterate ID... ?--force? ** ** Remove one or more purge events from the graveyard. Once a purge ** event is obliterated, it can no longer be undone. The --force ** option suppresses the confirmation prompt. ** ** fossil purge tickets NAME ... ?OPTIONS? ** ** TBD... ** ** fossil purge undo ID ** ** Restore the content previously removed by purge ID. ** ** fossil purge wiki NAME ... ?OPTIONS? ** ** TBD... 
** ** COMMON OPTIONS: ** ** --explain Make no changes, but show what would happen. ** --dry-run An alias for --explain ** ** SUMMARY: ** fossil purge artifacts UUID.. [OPTIONS] ** fossil purge cat UUID... ** fossil purge checkins TAGS... [OPTIONS] ** fossil purge files FILENAME... [OPTIONS] ** fossil purge list ** fossil purge obliterate ID... ** fossil purge tickets NAME... [OPTIONS] ** fossil purge undo ID ** fossil purge wiki NAME... [OPTIONS] */ void purge_cmd(void){ int purgeFlags = PURGE_MOVETO_GRAVEYARD | PURGE_PRINT_SUMMARY; const char *zSubcmd; int n; int i; Stmt q; if( g.argc<3 ) usage("SUBCOMMAND ?ARGS?"); zSubcmd = g.argv[2]; db_find_and_open_repository(0,0); n = (int)strlen(zSubcmd); if( find_option("explain",0,0)!=0 || find_option("dry-run",0,0)!=0 ){ purgeFlags |= PURGE_EXPLAIN_ONLY; } if( strncmp(zSubcmd, "artifacts", n)==0 ){ verify_all_options(); db_begin_transaction(); db_multi_exec("CREATE TEMP TABLE ok(rid INTEGER PRIMARY KEY)"); for(i=3; i<g.argc; i++){ int r = name_to_typed_rid(g.argv[i], ""); db_multi_exec("INSERT OR IGNORE INTO ok(rid) VALUES(%d);", r); } describe_artifacts_to_stdout("IN ok", 0); purge_artifact_list("ok", "", purgeFlags); db_end_transaction(0); }else if( strncmp(zSubcmd, "cat", n)==0 ){ int i, piid; Blob content; if( g.argc<4 ) usage("cat UUID..."); for(i=3; i<g.argc; i++){ piid = db_int(0, "SELECT piid FROM purgeitem WHERE uuid LIKE '%q%%'", g.argv[i]); if( piid==0 ) fossil_fatal("no such item: %s", g.argv[3]); purge_extract_item(piid, &content); blob_write_to_file(&content, "-"); blob_reset(&content); } }else if( strncmp(zSubcmd, "checkins", n)==0 ){ int vid; if( find_option("explain",0,0)!=0 || find_option("dry-run",0,0)!=0 ){ purgeFlags |= PURGE_EXPLAIN_ONLY; } verify_all_options(); db_begin_transaction(); if( g.argc<=3 ) usage("checkins TAGS... [OPTIONS]"); db_multi_exec("CREATE TEMP TABLE ok(rid INTEGER PRIMARY KEY)"); for(i=3; i<g.argc; i++){ int r = name_to_typed_rid(g.argv[i], "br"); compute_descendants(r, 1000000000); } vid = db_lget_int("checkout",0); if( db_exists("SELECT 1 FROM ok WHERE rid=%d",vid) ){ fossil_fatal("cannot purge the current checkout"); } find_checkin_associates("ok", 1); purge_artifact_list("ok", "", purgeFlags); db_end_transaction(0); }else if( strncmp(zSubcmd, "files", n)==0 ){ verify_all_options(); db_begin_transaction(); db_multi_exec("CREATE TEMP TABLE ok(rid INTEGER PRIMARY KEY)"); for(i=3; i<g.argc; i++){ db_multi_exec( "INSERT OR IGNORE INTO ok(rid) " " SELECT fid FROM mlink, filename" " WHERE mlink.fnid=filename.fnid" " AND (filename.name=%Q OR filename.name GLOB '%q/*')", g.argv[i], g.argv[i] ); } purge_artifact_list("ok", "", purgeFlags); db_end_transaction(0); }else if( strncmp(zSubcmd, "list", n)==0 || strcmp(zSubcmd,"ls")==0 ){ int showDetail = find_option("l","l",0)!=0; if( !db_table_exists("repository","purgeevent") ) return; db_prepare(&q, "SELECT peid, datetime(ctime,'unixepoch',toLocal())" " FROM purgeevent"); while( db_step(&q)==SQLITE_ROW ){ fossil_print("%4d on %s\n", db_column_int(&q,0), db_column_text(&q,1)); if( showDetail ){ purge_list_event_content(db_column_int(&q,0)); } } db_finalize(&q); }else if( strncmp(zSubcmd, "obliterate", n)==0 ){ int i; int bForce = find_option("force","f",0)!=0; if( g.argc<4 ) usage("obliterate ID..."); if( !bForce ){ Blob ans; char cReply; prompt_user( "Obliterating the graveyard will permanently delete information.\n" "Changes cannot be undone. Continue (y/N)? 
", &ans); cReply = blob_str(&ans)[0]; if( cReply!='y' && cReply!='Y' ){ fossil_exit(1); } } db_begin_transaction(); for(i=3; i<g.argc; i++){ int peid = atoi(g.argv[i]); if( !db_exists("SELECT 1 FROM purgeevent WHERE peid=%d",peid) ){ fossil_fatal("no such purge event: %s", g.argv[i]); } db_multi_exec( "DELETE FROM purgeevent WHERE peid=%d;" "DELETE FROM purgeitem WHERE peid=%d;", peid, peid ); } db_end_transaction(0); }else if( strncmp(zSubcmd, "tickets", n)==0 ){ fossil_fatal("not yet implemented...."); }else if( strncmp(zSubcmd, "undo", n)==0 ){ int peid; if( g.argc!=4 ) usage("undo ID"); peid = atoi(g.argv[3]); if( (purgeFlags & PURGE_EXPLAIN_ONLY)==0 ){ db_begin_transaction(); db_multi_exec( "CREATE TEMP TABLE ix(" " piid INTEGER PRIMARY KEY," " srcid INTEGER" ");" "CREATE INDEX ixsrcid ON ix(srcid);" "INSERT INTO ix(piid,srcid) " " SELECT piid, coalesce(srcid,0) FROM purgeitem WHERE peid=%d;", peid ); db_multi_exec( "DELETE FROM shun" " WHERE uuid IN (SELECT uuid FROM purgeitem WHERE peid=%d);", peid ); manifest_crosslink_begin(); purge_item_resurrect(0, 0); manifest_crosslink_end(0); db_multi_exec("DELETE FROM purgeevent WHERE peid=%d", peid); db_multi_exec("DELETE FROM purgeitem WHERE peid=%d", peid); db_end_transaction(0); } }else if( strncmp(zSubcmd, "wiki", n)==0 ){ fossil_fatal("not yet implemented...."); }else{ fossil_fatal("unknown subcommand \"%s\".\n" "should be one of: cat, checkins, files, list, obliterate," " tickets, undo, wiki", zSubcmd); } } |
Changes to src/rebuild.c.
︙ | ︙ | |||
347 348 349 350 351 352 353 | for(;;){ zTable = db_text(0, "SELECT name FROM sqlite_master /*scan*/" " WHERE type='table'" " AND name NOT IN ('admin_log', 'blob','delta','rcvfrom','user'," "'config','shun','private','reportfmt'," "'concealed','accesslog','modreq'," | | | 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 | for(;;){ zTable = db_text(0, "SELECT name FROM sqlite_master /*scan*/" " WHERE type='table'" " AND name NOT IN ('admin_log', 'blob','delta','rcvfrom','user'," "'config','shun','private','reportfmt'," "'concealed','accesslog','modreq'," "'purgeevent','purgeitem','unversioned')" " AND name NOT GLOB 'sqlite_*'" " AND name NOT GLOB 'fx_*'" ); if( zTable==0 ) break; db_multi_exec("DROP TABLE %Q", zTable); free(zTable); } |
︙ | ︙ | |||
681 682 683 684 685 686 687 | } fossil_print("%-15s %6d\n", "Other:", g.parseCnt[CFTYPE_ANY] - subtotal); } } /* ** COMMAND: test-detach | | | 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 | } fossil_print("%-15s %6d\n", "Other:", g.parseCnt[CFTYPE_ANY] - subtotal); } } /* ** COMMAND: test-detach ** ** Usage: %fossil test-detach ?REPOSITORY? ** ** Change the project-code and make other changes in order to prevent ** the repository from ever again pushing or pulling to other ** repositories. Used to create a "test" repository for development ** testing by cloning a working project repository. */ |
︙ | ︙ | |||
794 795 796 797 798 799 800 | } db_finalize(&q); } } /* ** COMMAND: scrub* | | | 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 | } db_finalize(&q); } } /* ** COMMAND: scrub* ** ** Usage: %fossil scrub ?OPTIONS? ?REPOSITORY? ** ** The command removes sensitive information (such as passwords) from a ** repository so that the repository can be sent to an untrusted reader. ** ** By default, only passwords are removed. However, if the --verily option ** is added, then private branches, concealed email addresses, IP |
︙ | ︙ |
Changes to src/regexp.c.
︙ | ︙ | |||
203 204 205 206 207 208 209 | strncmp((const char*)zIn+in.i, (const char*)pRe->zInit, pRe->nInit)!=0) ){ in.i++; } if( in.i+pRe->nInit>in.mx ) return 0; } | | | 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 | strncmp((const char*)zIn+in.i, (const char*)pRe->zInit, pRe->nInit)!=0) ){ in.i++; } if( in.i+pRe->nInit>in.mx ) return 0; } if( pRe->nState<=count(aSpace)*2 ){ pToFree = 0; aStateSet[0].aState = aSpace; }else{ pToFree = fossil_malloc( sizeof(ReStateNumber)*2*pRe->nState ); if( pToFree==0 ) return -1; aStateSet[0].aState = pToFree; } |
︙ | ︙ |
Changes to src/report.c.
︙ | ︙ | |||
171 172 173 174 175 176 177 | const char *zArg2, const char *zArg3, const char *zArg4 ){ int rc = SQLITE_OK; if( *(char**)pError ){ /* We've already seen an error. No need to continue. */ | | > | | | 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 | const char *zArg2, const char *zArg3, const char *zArg4 ){ int rc = SQLITE_OK; if( *(char**)pError ){ /* We've already seen an error. No need to continue. */ return SQLITE_DENY; } switch( code ){ case SQLITE_SELECT: case SQLITE_RECURSIVE: case SQLITE_FUNCTION: { break; } case SQLITE_READ: { static const char *const azAllowed[] = { "ticket", "ticketchng", "blob", "filename", "mlink", "plink", "event", "tag", "tagxref", "unversioned", }; int i; if( fossil_strncmp(zArg1, "fx_", 3)==0 ){ break; } for(i=0; i<count(azAllowed); i++){ if( fossil_stricmp(zArg1, azAllowed[i])==0 ) break; } if( i>=count(azAllowed) ){ *(char**)pError = mprintf("access to table \"%s\" is restricted",zArg1); rc = SQLITE_DENY; }else if( !g.perm.RdAddr && strncmp(zArg2, "private_", 8)==0 ){ rc = SQLITE_IGNORE; } break; } |
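For context, this callback has the six-argument shape that SQLite's sqlite3_set_authorizer() interface expects. A sketch of how such a callback is typically installed around the user-supplied report SQL (the registration itself is not part of this hunk, and the callback name below is assumed) might look like:

    /* Illustrative only: route all report SQL through the authorizer above */
    char *zErr = 0;
    sqlite3_set_authorizer(g.db, report_query_authorizer, (void*)&zErr);
    /* ... prepare and step the report statement here ... */
    sqlite3_set_authorizer(g.db, 0, 0);  /* lift the restriction afterwards */
    if( zErr ) fossil_fatal("%s", zErr);

The callback records the first error in the string referenced by the user-data pointer, which is why it bails out early with SQLITE_DENY once *pError is already set.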
︙ | ︙ | |||
447 448 449 450 451 452 453 | if( P("copy") ){ rn = 0; zTitle = mprintf("Copy Of %s", zTitle); zOwner = g.zLogin; } } if( zOwner==0 ) zOwner = g.zLogin; | | | | 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 | if( P("copy") ){ rn = 0; zTitle = mprintf("Copy Of %s", zTitle); zOwner = g.zLogin; } } if( zOwner==0 ) zOwner = g.zLogin; style_submenu_element("Cancel", "reportlist"); if( rn>0 ){ style_submenu_element("Delete", "rptedit?rn=%d&del1=1", rn); } style_header("%s", rn>0 ? "Edit Report Format":"Create New Report Format"); if( zErr ){ @ <blockquote class="reportError">%h(zErr)</blockquote> } @ <form action="rptedit" method="post"><div> @ <input type="hidden" name="rn" value="%d(rn)" /> |
︙ | ︙ | |||
697 698 699 700 701 702 703 | pState->wikiFlags = WIKI_NOBADLINKS; pState->zWikiStart = ""; pState->zWikiEnd = ""; if( P("plaintext") ){ pState->wikiFlags |= WIKI_LINKSONLY; pState->zWikiStart = "<pre class='verbatim'>"; pState->zWikiEnd = "</pre>"; | | < | | | 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 | pState->wikiFlags = WIKI_NOBADLINKS; pState->zWikiStart = ""; pState->zWikiEnd = ""; if( P("plaintext") ){ pState->wikiFlags |= WIKI_LINKSONLY; pState->zWikiStart = "<pre class='verbatim'>"; pState->zWikiEnd = "</pre>"; style_submenu_element("Formatted", "%R/rptview?rn=%d", pState->rn); }else{ style_submenu_element("Plaintext", "%R/rptview?rn=%d&plaintext", pState->rn); } }else{ pState->nCol++; } } } |
︙ | ︙ | |||
893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 | int rc = SQLITE_OK; /* Return code */ const char *zLeftover; /* Tail of unprocessed SQL */ sqlite3_stmt *pStmt = 0; /* The current SQL statement */ const char **azCols = 0; /* Names of result columns */ int nCol; /* Number of columns of output */ const char **azVals = 0; /* Text of all output columns */ int i; /* Loop counter */ pStmt = 0; rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, &zLeftover); assert( rc==SQLITE_OK || pStmt==0 ); if( rc!=SQLITE_OK ){ return rc; } if( !pStmt ){ /* this happens for a comment or white-space */ return SQLITE_OK; } if( !sqlite3_stmt_readonly(pStmt) ){ sqlite3_finalize(pStmt); return SQLITE_ERROR; } | > | > > > > > > | > > | > | 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 | int rc = SQLITE_OK; /* Return code */ const char *zLeftover; /* Tail of unprocessed SQL */ sqlite3_stmt *pStmt = 0; /* The current SQL statement */ const char **azCols = 0; /* Names of result columns */ int nCol; /* Number of columns of output */ const char **azVals = 0; /* Text of all output columns */ int i; /* Loop counter */ int nVar; /* Number of parameters */ pStmt = 0; rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, &zLeftover); assert( rc==SQLITE_OK || pStmt==0 ); if( rc!=SQLITE_OK ){ return rc; } if( !pStmt ){ /* this happens for a comment or white-space */ return SQLITE_OK; } if( !sqlite3_stmt_readonly(pStmt) ){ sqlite3_finalize(pStmt); return SQLITE_ERROR; } nVar = sqlite3_bind_parameter_count(pStmt); for(i=1; i<=nVar; i++){ const char *zVar = sqlite3_bind_parameter_name(pStmt, i); if( zVar==0 ) continue; if( zVar[0]!='$' && zVar[0]!='@' && zVar[0]!=':' ) continue; if( !fossil_islower(zVar[1]) ) continue; if( strcmp(zVar, "$login")==0 ){ sqlite3_bind_text(pStmt, i, g.zLogin, -1, SQLITE_TRANSIENT); }else{ sqlite3_bind_text(pStmt, i, P(zVar+1), -1, SQLITE_TRANSIENT); } } nCol = sqlite3_column_count(pStmt); azVals = fossil_malloc(2*nCol*sizeof(const char*) + 1); while( (rc = sqlite3_step(pStmt))==SQLITE_ROW ){ if( azCols==0 ){ azCols = &azVals[nCol]; for(i=0; i<nCol; i++){ azCols[i] = sqlite3_column_name(pStmt, i);
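The practical effect of the new binding loop is that report authors can use lower-case named parameters directly in their SQL. For instance (an illustrative query only, assuming the stock ticket schema), a "my tickets" report could contain:

    SELECT tkt_id, title, status FROM ticket WHERE private_contact=$login

Here $login is bound to the logged-in user's name, while any other lower-case named parameter is filled from the HTTP query parameter of the same name via P().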
︙ | ︙ | |||
1181 1182 1183 1184 1185 1186 1187 | zDir = !strcmp("ASC",zDir) ? "ASC" : "DESC"; zSql = mprintf("SELECT * FROM (%s) ORDER BY %d %s", zSql, nField, zDir); } } count = 0; if( !tabs ){ | | | < | | | < | 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 | zDir = !strcmp("ASC",zDir) ? "ASC" : "DESC"; zSql = mprintf("SELECT * FROM (%s) ORDER BY %d %s", zSql, nField, zDir); } } count = 0; if( !tabs ){ struct GenerateHTML sState = { 0, 0, 0, 0, 0, 0, 0, 0, 0 }; db_multi_exec("PRAGMA empty_result_callbacks=ON"); style_submenu_element("Raw", "rptview?tablist=1&%h", PD("QUERY_STRING","")); if( g.perm.Admin || (g.perm.TktFmt && g.zLogin && fossil_strcmp(g.zLogin,zOwner)==0) ){ style_submenu_element("Edit", "rptedit?rn=%d", rn); } if( g.perm.TktFmt ){ style_submenu_element("SQL", "rptsql?rn=%d",rn); } if( g.perm.NewTkt ){ style_submenu_element("New Ticket", "%s/tktnew", g.zTop); } style_header("%s", zTitle); output_color_key(zClrKey, 1, "border=\"0\" cellpadding=\"3\" cellspacing=\"0\" class=\"report\""); @ <table border="1" cellpadding="2" cellspacing="0" class="report" @ id="reportTable"> sState.rn = rn; |
︙ | ︙ |
Changes to src/rss.c.
︙ | ︙ | |||
214 215 216 217 218 219 220 | if( zFreeProjectName != 0 ){ free( zFreeProjectName ); } } /* | | | 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 | if( zFreeProjectName != 0 ){ free( zFreeProjectName ); } } /* ** COMMAND: rss* ** ** Usage: %fossil rss ?OPTIONS? ** ** The CLI variant of the /timeline.rss page, this produces an RSS ** feed of the timeline to stdout. Options: ** ** -type|y FLAG |
︙ | ︙ |
Changes to src/search.c.
︙ | ︙ | |||
11 12 13 14 15 16 17 | ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** | | | < | > | > > > > > > > > > | | | 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 | ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** This file contains code to implement search functions ** against timeline comments, check-in content, wiki pages, and/or tickets. ** ** The search can either be a per-query "grep"-like search that scans ** the entire corpus, or it can use the FTS4 or FTS5 search engine of ** SQLite. The choice is an administrator configuration option. ** ** The first option is referred to as "full-scan search". The second ** option is called "indexed search". ** ** The code in this file is ordered approximately as follows: ** ** (1) The full-scan search engine ** (2) The indexed search engine ** (3) Higher level interfaces that use either (1) or (2) according ** to the current search configuration settings */ #include "config.h" #include "search.h" #include <assert.h> #if INTERFACE /* Maximum number of search terms for full-scan search */ #define SEARCH_MAX_TERM 8 /* ** A compiled search pattern used for full-scan search. */ struct Search { int nTerm; /* Number of search terms */ struct srchTerm { /* For each search term */ char *z; /* Text */ int n; /* length */ } a[SEARCH_MAX_TERM];
︙ | ︙ | |||
83 84 85 86 87 88 89 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, }; #define ISALNUM(x) (!isBoundary[(x)&0xff]) /* | | | | 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, }; #define ISALNUM(x) (!isBoundary[(x)&0xff]) /* ** Destroy a full-scan search context. */ void search_end(Search *p){ if( p ){ fossil_free(p->zPattern); fossil_free(p->zMarkBegin); fossil_free(p->zMarkEnd); fossil_free(p->zMarkGap); if( p->iScore ) blob_reset(&p->snip); memset(p, 0, sizeof(*p)); if( p!=&gSearch ) fossil_free(p); } } /* ** Compile a full-scan search pattern */ static Search *search_init( const char *zPattern, /* The search pattern */ const char *zMarkBegin, /* Start of a match */ const char *zMarkEnd, /* End of a match */ const char *zMarkGap, /* A gap between two matches */ unsigned fSrchFlg /* Flags */ |
︙ | ︙ | |||
155 156 157 158 159 160 161 | blob_appendf(pSnip, "%#h", n, zTxt); }else{ blob_append(pSnip, zTxt, n); } } } | > | | 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 | blob_appendf(pSnip, "%#h", n, zTxt); }else{ blob_append(pSnip, zTxt, n); } } } /* This is the core search engine for full-scan search. ** ** Compare a search pattern against one or more input strings which ** collectively comprise a document. Return a match score. Any ** positive value means there was a match. Zero means that one or ** more terms are missing. ** ** The score and a snippet are recorded for future use. **
︙ | ︙ | |||
316 317 318 319 320 321 322 323 324 325 326 327 328 329 | return score; } /* ** COMMAND: test-match ** ** Usage: %fossil test-match SEARCHSTRING FILE1 FILE2 ... */ void test_match_cmd(void){ Search *p; int i; Blob x; int score; char *zDoc; | > > > | 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 | return score; } /* ** COMMAND: test-match ** ** Usage: %fossil test-match SEARCHSTRING FILE1 FILE2 ... ** ** Run the full-scan search algorithm using SEARCHSTRING against ** the text of the files listed. Output matches and snippets. */ void test_match_cmd(void){ Search *p; int i; Blob x; int score; char *zDoc; |
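For example (file names chosen arbitrarily), the matcher can be exercised from a checkout with:

    fossil test-match "full-scan search" src/search.c www/changes.wiki

Each file whose text satisfies all of the search terms is reported along with its score and the generated snippet.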
︙ | ︙ | |||
349 350 351 352 353 354 355 | fossil_print("%.78c\n%s\n%.78c\n\n", '=', blob_str(&p->snip), '='); } } search_end(p); } /* | | | > > > > | 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 | fossil_print("%.78c\n%s\n%.78c\n\n", '=', blob_str(&p->snip), '='); } } search_end(p); } /* ** An SQL function to initialize the full-scan search pattern: ** ** search_init(PATTERN,BEGIN,END,GAP,FLAGS) ** ** All arguments are optional. PATTERN is the search pattern. If it ** is omitted, then the global search pattern is reset. BEGIN and END ** and GAP are the strings used to construct snippets. FLAGS is an ** integer bit pattern containing the various SRCH_CKIN, SRCH_DOC, ** SRCH_TKT, or SRCH_ALL bits to determine what is to be searched. */ static void search_init_sqlfunc( sqlite3_context *context, int argc, sqlite3_value **argv ){ const char *zPattern = 0; |
︙ | ︙ | |||
384 385 386 387 388 389 390 | if( zPattern && zPattern[0] ){ search_init(zPattern, zBegin, zEnd, zGap, flg | SRCHFLG_STATIC); }else{ search_end(&gSearch); } } | > | > | | | > > | | < > > > > > > | > > > > > | > > > > > | > | | > | < | | 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 | if( zPattern && zPattern[0] ){ search_init(zPattern, zBegin, zEnd, zGap, flg | SRCHFLG_STATIC); }else{ search_end(&gSearch); } } /* search_match(TEXT, TEXT, ....) ** ** Using the full-scan search engine created by the most recent call ** to search_init(), match the input the TEXT arguments. ** Remember the results global full-scan search object. ** Return non-zero on a match and zero on a miss. */ static void search_match_sqlfunc( sqlite3_context *context, int argc, sqlite3_value **argv ){ const char *azDoc[5]; int nDoc; int rc; for(nDoc=0; nDoc<count(azDoc) && nDoc<argc; nDoc++){ azDoc[nDoc] = (const char*)sqlite3_value_text(argv[nDoc]); if( azDoc[nDoc]==0 ) azDoc[nDoc] = ""; } rc = search_match(&gSearch, nDoc, azDoc); sqlite3_result_int(context, rc); } /* search_score() ** ** Return the match score for the last successful search_match call. */ static void search_score_sqlfunc( sqlite3_context *context, int argc, sqlite3_value **argv ){ sqlite3_result_int(context, gSearch.iScore); } /* search_snippet() ** ** Return a snippet for the last successful search_match() call. */ static void search_snippet_sqlfunc( sqlite3_context *context, int argc, sqlite3_value **argv ){ if( blob_size(&gSearch.snip)>0 ){ sqlite3_result_text(context, blob_str(&gSearch.snip), -1, fossil_free); blob_init(&gSearch.snip, 0, 0); } } /* stext(TYPE, RID, ARG) ** ** This is an SQLite function that computes the searchable text. ** It is a wrapper around the search_stext() routine. See the ** search_stext() routine for further detail. */ static void search_stext_sqlfunc( sqlite3_context *context, int argc, sqlite3_value **argv ){ const char *zType = (const char*)sqlite3_value_text(argv[0]); int rid = sqlite3_value_int(argv[1]); const char *zName = (const char*)sqlite3_value_text(argv[2]); sqlite3_result_text(context, search_stext_cached(zType[0],rid,zName,0), -1, SQLITE_TRANSIENT); } /* title(TYPE, RID, ARG) ** ** Return the title of the document to be search. */ static void search_title_sqlfunc( sqlite3_context *context, int argc, sqlite3_value **argv ){ const char *zType = (const char*)sqlite3_value_text(argv[0]); int rid = sqlite3_value_int(argv[1]); const char *zName = (const char*)sqlite3_value_text(argv[2]); int nHdr = 0; char *z = search_stext_cached(zType[0], rid, zName, &nHdr); if( nHdr || zType[0]!='d' ){ sqlite3_result_text(context, z, nHdr, SQLITE_TRANSIENT); }else{ sqlite3_result_value(context, argv[2]); } } /* body(TYPE, RID, ARG) ** ** Return the body of the document to be search. 
*/ static void search_body_sqlfunc( sqlite3_context *context, int argc, sqlite3_value **argv ){ const char *zType = (const char*)sqlite3_value_text(argv[0]); int rid = sqlite3_value_int(argv[1]); const char *zName = (const char*)sqlite3_value_text(argv[2]); int nHdr = 0; char *z = search_stext_cached(zType[0], rid, zName, &nHdr); sqlite3_result_text(context, z+nHdr+1, -1, SQLITE_TRANSIENT); } /* urlencode(X) ** ** Encode a string for use as a query parameter in a URL. This is ** the equivalent of printf("%T",X). */ static void search_urlencode_sqlfunc( sqlite3_context *context, int argc, sqlite3_value **argv ){ char *z = mprintf("%T",sqlite3_value_text(argv[0])); sqlite3_result_text(context, z, -1, fossil_free); } /* ** Register the various SQL functions (defined above) needed to implement ** full-scan search. */ void search_sql_setup(sqlite3 *db){ static int once = 0; if( once++ ) return; sqlite3_create_function(db, "search_match", -1, SQLITE_UTF8, 0, search_match_sqlfunc, 0, 0); sqlite3_create_function(db, "search_score", 0, SQLITE_UTF8, 0, |
︙ | ︙ | |||
514 515 516 517 518 519 520 | search_urlencode_sqlfunc, 0, 0); } /* ** Testing the search function. ** ** COMMAND: search* | | | 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 | search_urlencode_sqlfunc, 0, 0); } /* ** Testing the search function. ** ** COMMAND: search* ** ** Usage: %fossil search [-all|-a] [-limit|-n #] [-width|-W #] pattern... ** ** Search for timeline entries matching all words provided on the ** command line. Whole-word matches score more highly than partial ** matches. ** ** Outputs, by default, some top-N fraction of the results. The -all
︙ | ︙ | |||
618 619 620 621 622 623 624 | { SRCH_TKT, "search-tkt" }, { SRCH_WIKI, "search-wiki" }, }; int i; if( g.perm.Read==0 ) srchFlags &= ~(SRCH_CKIN|SRCH_DOC); if( g.perm.RdTkt==0 ) srchFlags &= ~(SRCH_TKT); if( g.perm.RdWiki==0 ) srchFlags &= ~(SRCH_WIKI); | | | > > > > > > | | | 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 | { SRCH_TKT, "search-tkt" }, { SRCH_WIKI, "search-wiki" }, }; int i; if( g.perm.Read==0 ) srchFlags &= ~(SRCH_CKIN|SRCH_DOC); if( g.perm.RdTkt==0 ) srchFlags &= ~(SRCH_TKT); if( g.perm.RdWiki==0 ) srchFlags &= ~(SRCH_WIKI); for(i=0; i<count(aSetng); i++){ unsigned int m = aSetng[i].m; if( (srchFlags & m)==0 ) continue; if( ((knownGood|knownBad) & m)!=0 ) continue; if( db_get_boolean(aSetng[i].zKey,0) ){ knownGood |= m; }else{ knownBad |= m; } } return srchFlags & ~knownBad; } /* ** When this routine is called, there already exists a table ** ** x(label,url,score,id,snip). ** ** label: The "name" of the document containing the match ** url: A URL for the document ** score: How well the document matched ** id: The document id. Format: xNNNNN, x: type, N: number ** snip: A snippet for the match ** ** And the srchFlags parameter has been validated. This routine ** fills the X table with search results using a full-scan search. ** ** The companion indexed search routine is search_indexed(). */ static void search_fullscan( const char *zPattern, /* The query pattern */ unsigned int srchFlags /* What to search over */ ){ search_init(zPattern, "<mark>", "</mark>", " ... ", SRCHFLG_STATIC|SRCHFLG_HTML); |
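Schematically (a sketch of the statement shape only, not the exact SQL inside search_fullscan(); the check-in case is shown and formatting details are simplified), each enabled category contributes an INSERT built on the search_match()/search_score()/search_snippet() SQL functions defined earlier:

    /* Illustrative shape only */
    db_multi_exec(
      "INSERT INTO x(label,url,score,id,date,snip)"
      " SELECT printf('Check-in [%%.16s] on %%s',blob.uuid,datetime(event.mtime)),"
      "        printf('/timeline?c=%%.20s',blob.uuid),"
      "        search_score(),"
      "        'c'||event.objid,"
      "        event.mtime,"
      "        search_snippet()"
      "   FROM event JOIN blob ON blob.rid=event.objid"
      "  WHERE search_match(coalesce(event.ecomment,event.comment));"
    );

search_fullscan() itself calls search_init() first, as shown above, to supply the pattern and the snippet markup before any of these statements run.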
︙ | ︙ | |||
803 804 805 806 807 808 809 | sqlite3_result_double(context, r); #endif } /* ** When this routine is called, there already exists a table ** | | > > > > > > | | | 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 | sqlite3_result_double(context, r); #endif } /* ** When this routine is called, there already exists a table ** ** x(label,url,score,id,snip). ** ** label: The "name" of the document containing the match ** url: A URL for the document ** score: How well the document matched ** id: The document id. Format: xNNNNN, x: type, N: number ** snip: A snippet for the match ** ** And the srchFlags parameter has been validated. This routine ** fills the X table with search results using FTS indexed search. ** ** The companion full-scan search routine is search_fullscan(). */ static void search_indexed( const char *zPattern, /* The query pattern */ unsigned int srchFlags /* What to search over */ ){ Blob sql; if( srchFlags==0 ) return; |
︙ | ︙ | |||
841 842 843 844 845 846 847 | static const struct { unsigned m; char c; } aMask[] = { { SRCH_CKIN, 'c' }, { SRCH_DOC, 'd' }, { SRCH_TKT, 't' }, { SRCH_WIKI, 'w' }, }; int i; | | | 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 | static const struct { unsigned m; char c; } aMask[] = { { SRCH_CKIN, 'c' }, { SRCH_DOC, 'd' }, { SRCH_TKT, 't' }, { SRCH_WIKI, 'w' }, }; int i; for(i=0; i<count(aMask); i++){ if( srchFlags & aMask[i].m ){ blob_appendf(&sql, "%sftsdocs.type='%c'", zSep, aMask[i].c); zSep = " OR "; } } blob_append(&sql,")",1); } |
︙ | ︙ | |||
908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 | } /* ** This routine generates web-page output for a search operation. ** Other web-pages can invoke this routine to add search results ** in the middle of the page. ** ** Return the number of rows. */ int search_run_and_output( const char *zPattern, /* The query pattern */ unsigned int srchFlags, /* What to search over */ int fDebug /* Extra debugging output */ ){ Stmt q; int nRow = 0; srchFlags = search_restrict(srchFlags); if( srchFlags==0 ) return 0; search_sql_setup(g.db); add_content_sql_commands(g.db); db_multi_exec( "CREATE TEMP TABLE x(label,url,score,id,date,snip);" ); if( !search_index_exists() ){ | > > > > | | | | | | 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 | } /* ** This routine generates web-page output for a search operation. ** Other web-pages can invoke this routine to add search results ** in the middle of the page. ** ** This routine works for both full-scan and indexed search. The ** appropriate low-level search routine is called according to the ** current configuration. ** ** Return the number of rows. */ int search_run_and_output( const char *zPattern, /* The query pattern */ unsigned int srchFlags, /* What to search over */ int fDebug /* Extra debugging output */ ){ Stmt q; int nRow = 0; srchFlags = search_restrict(srchFlags); if( srchFlags==0 ) return 0; search_sql_setup(g.db); add_content_sql_commands(g.db); db_multi_exec( "CREATE TEMP TABLE x(label,url,score,id,date,snip);" ); if( !search_index_exists() ){ search_fullscan(zPattern, srchFlags); /* Full-scan search */ }else{ search_update_index(srchFlags); /* Update the index, if necessary */ search_indexed(zPattern, srchFlags); /* Indexed search */ } db_prepare(&q, "SELECT url, snip, label, score, id" " FROM x" " ORDER BY score DESC, date DESC;"); while( db_step(&q)==SQLITE_ROW ){ const char *zUrl = db_column_text(&q, 0); const char *zSnippet = db_column_text(&q, 1); const char *zLabel = db_column_text(&q, 2); if( nRow==0 ){ @ <ol> } nRow++; @ <li><p><a href='%R%s(zUrl)'>%h(zLabel)</a> if( fDebug ){ @ (%e(db_column_double(&q,3)), %s(db_column_text(&q,4)) } @ <br /><span class='snippet'>%z(cleanSnippet(zSnippet))</span></li> } db_finalize(&q); if( nRow ){ @ </ol> } return nRow; } |
︙ | ︙ | |||
996 997 998 999 1000 1001 1002 | zDisable2 = " disabled"; zPattern = ""; }else{ zDisable1 = " autofocus"; zDisable2 = ""; zPattern = PD("s",""); } | | | | 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 | zDisable2 = " disabled"; zPattern = ""; }else{ zDisable1 = " autofocus"; zDisable2 = ""; zPattern = PD("s",""); } @ <form method='GET' action='%R/%T(g.zPath)'> if( zClass ){ @ <div class='searchForm searchForm%s(zClass)'> }else{ @ <div class='searchForm'> } @ <input type="text" name="s" size="40" value="%h(zPattern)"%s(zDisable1)> if( useYparam && (srchFlags & (srchFlags-1))!=0 && useYparam ){ static const struct { char *z; char *zNm; unsigned m; } aY[] = { { "all", "All", SRCH_ALL }, { "c", "Check-ins", SRCH_CKIN }, { "d", "Docs", SRCH_DOC }, { "t", "Tickets", SRCH_TKT }, { "w", "Wiki", SRCH_WIKI }, }; const char *zY = PD("y","all"); unsigned newFlags = srchFlags; int i; @ <select size='1' name='y'> for(i=0; i<count(aY); i++){ if( (aY[i].m & srchFlags)==0 ) continue; cgi_printf("<option value='%s'", aY[i].z); if( fossil_strcmp(zY,aY[i].z)==0 ){ newFlags &= aY[i].m; cgi_printf(" selected"); } cgi_printf(">%s</option>\n", aY[i].zNm); |
︙ | ︙ | |||
1171 1172 1173 1174 1175 1176 1177 | ** w Wiki page ** c Check-in comment ** t Ticket text ** ** rid The RID of an artifact that defines the object ** being searched. ** | | > > > | 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 | ** w Wiki page ** c Check-in comment ** t Ticket text ** ** rid The RID of an artifact that defines the object ** being searched. ** ** zName Name of the object being searched. This is used ** only to help figure out the mimetype (text/plain, ** text/html, text/x-fossil-wiki, or text/x-markdown) ** so that the code can know how to simplify the text. */ void search_stext( char cType, /* Type of document */ int rid, /* BLOB.RID or TAG.TAGID value for document */ const char *zName, /* Auxiliary information */ Blob *pOut /* OUT: Initialize to the search text */ ){
︙ | ︙ | |||
1274 1275 1276 1277 1278 1279 1280 | ** for the same document return the same pointer. The returned pointer ** is valid until the next invocation of this routine. Call this routine ** with an eType of 0 to clear the cache. */ char *search_stext_cached( char cType, /* Type of document */ int rid, /* BLOB.RID or TAG.TAGID value for document */ | | | 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 1340 1341 1342 1343 1344 | ** for the same document return the same pointer. The returned pointer ** is valid until the next invocation of this routine. Call this routine ** with an eType of 0 to clear the cache. */ char *search_stext_cached( char cType, /* Type of document */ int rid, /* BLOB.RID or TAG.TAGID value for document */ const char *zName, /* Auxiliary information, for mimetype */ int *pnTitle /* OUT: length of title in bytes excluding \n */ ){ static struct { Blob stext; /* Cached search text */ char cType; /* The type */ int rid; /* The RID */ int nTitle; /* Number of bytes in title */ |
︙ | ︙ | |||
1306 1307 1308 1309 1310 1311 1312 | if( pnTitle ) *pnTitle = cache.nTitle; return blob_str(&cache.stext); } /* ** COMMAND: test-search-stext ** | | > > > > > | 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 | if( pnTitle ) *pnTitle = cache.nTitle; return blob_str(&cache.stext); } /* ** COMMAND: test-search-stext ** ** Usage: fossil test-search-stext TYPE RID NAME ** ** Compute the search text for document TYPE-RID whose name is NAME. ** The TYPE is one of "c", "d", "t", or "w". The RID is the document ** ID. The NAME is used to figure out a mimetype to use for formatting ** the raw document text. */ void test_search_stext(void){ Blob out; db_find_and_open_repository(0,0); if( g.argc!=5 ) usage("TYPE RID NAME"); search_stext(g.argv[2][0], atoi(g.argv[3]), g.argv[4], &out); fossil_print("%s\n",blob_str(&out)); |
︙ | ︙ | |||
1341 1342 1343 1344 1345 1346 1347 | blob_reset(&out); } /* The schema for the full-text index */ static const char zFtsSchema[] = @ -- One entry for each possible search result | | | | | | | | | < | < < | | 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 1454 1455 | blob_reset(&out); } /* The schema for the full-text index */ static const char zFtsSchema[] = @ -- One entry for each possible search result @ CREATE TABLE IF NOT EXISTS repository.ftsdocs( @ rowid INTEGER PRIMARY KEY, -- Maps to the ftsidx.docid @ type CHAR(1), -- Type of document @ rid INTEGER, -- BLOB.RID or TAG.TAGID for the document @ name TEXT, -- Additional document description @ idxed BOOLEAN, -- True if currently in the index @ label TEXT, -- Label to print on search results @ url TEXT, -- URL to access this document @ mtime DATE, -- Date when document created @ bx TEXT, -- Temporary "body" content cache @ UNIQUE(type,rid) @ ); @ CREATE INDEX repository.ftsdocIdxed ON ftsdocs(type,rid,name) WHERE idxed==0; @ CREATE INDEX repository.ftsdocName ON ftsdocs(name) WHERE type='w'; @ CREATE VIEW IF NOT EXISTS repository.ftscontent AS @ SELECT rowid, type, rid, name, idxed, label, url, mtime, @ title(type,rid,name) AS 'title', body(type,rid,name) AS 'body' @ FROM ftsdocs; @ CREATE VIRTUAL TABLE IF NOT EXISTS repository.ftsidx @ USING fts4(content="ftscontent", title, body%s); ; static const char zFtsDrop[] = @ DROP TABLE IF EXISTS repository.ftsidx; @ DROP VIEW IF EXISTS repository.ftscontent; @ DROP TABLE IF EXISTS repository.ftsdocs; ; /* ** Create or drop the tables associated with a full-text index. */ static int searchIdxExists = -1; void search_create_index(void){ int useStemmer = db_get_boolean("search-stemmer",0); const char *zExtra = useStemmer ? ",tokenize=porter" : ""; search_sql_setup(g.db); db_multi_exec(zFtsSchema/*works-like:"%s"*/, zExtra/*safe-for-%s*/); searchIdxExists = 1; } void search_drop_index(void){ db_multi_exec(zFtsDrop/*works-like:""*/); searchIdxExists = 0; } /* ** Return true if the full-text search index exists */ int search_index_exists(void){ |
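Once the index exists, a query against it is just an FTS MATCH joined back to FTSDOCS for the metadata. A simplified sketch of the kind of statement search_indexed() issues (ranking and scoring are omitted, and the snippet arguments are illustrative):

    /* Illustrative query shape only */
    Stmt q;
    db_prepare(&q,
      "SELECT ftsdocs.label, ftsdocs.url,"
      "       snippet(ftsidx,'<mark>','</mark>',' ... ',-1,35)"
      "  FROM ftsidx CROSS JOIN ftsdocs"
      " WHERE ftsidx MATCH %Q"
      "   AND ftsdocs.rowid=ftsidx.docid",
      zPattern
    );

Note that the porter tokenizer is appended to the virtual table declaration only when the search-stemmer setting is on, which is why zExtra is spliced into the schema text above.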
︙ | ︙ | |||
1472 1473 1474 1475 1476 1477 1478 | ** check-ins, then update all 'd' entries in FTSDOCS that have ** changed. */ static void search_update_doc_index(void){ const char *zDocBr = db_get("doc-branch","trunk"); int ckid = zDocBr ? symbolic_name_to_rid(zDocBr,"ci") : 0; double rTime; | < < | 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 1548 1549 1550 | ** check-ins, then update all 'd' entries in FTSDOCS that have ** changed. */ static void search_update_doc_index(void){ const char *zDocBr = db_get("doc-branch","trunk"); int ckid = zDocBr ? symbolic_name_to_rid(zDocBr,"ci") : 0; double rTime; if( ckid==0 ) return; if( !db_exists("SELECT 1 FROM ftsdocs WHERE type='c' AND rid=%d" " AND NOT idxed", ckid) ) return; /* If we get this far, it means that changes to 'd' entries are ** required. */ rTime = db_double(0.0, "SELECT mtime FROM event WHERE objid=%d", ckid); db_multi_exec( "CREATE TEMP TABLE current_docs(rid INTEGER PRIMARY KEY, name);" "CREATE VIRTUAL TABLE IF NOT EXISTS temp.foci USING files_of_checkin;" "INSERT OR IGNORE INTO current_docs(rid, name)" " SELECT blob.rid, foci.filename FROM foci, blob" " WHERE foci.checkinID=%d AND blob.uuid=foci.uuid" " AND %z", |
︙ | ︙ | |||
1504 1505 1506 1507 1508 1509 1510 | " AND rid NOT IN (SELECT rid FROM current_docs)" ); db_multi_exec( "INSERT OR IGNORE INTO ftsdocs(type,rid,name,idxed,label,bx,url,mtime)" " SELECT 'd', rid, name, 0," " title('d',rid,name)," " body('d',rid,name)," | | | | 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 | " AND rid NOT IN (SELECT rid FROM current_docs)" ); db_multi_exec( "INSERT OR IGNORE INTO ftsdocs(type,rid,name,idxed,label,bx,url,mtime)" " SELECT 'd', rid, name, 0," " title('d',rid,name)," " body('d',rid,name)," " printf('/doc/%T/%%s',urlencode(name))," " %.17g" " FROM current_docs", zDocBr, rTime ); db_multi_exec( "INSERT INTO ftsidx(docid,title,body)" " SELECT rowid, label, bx FROM ftsdocs WHERE type='d' AND NOT idxed" ); db_multi_exec( "UPDATE ftsdocs SET" |
︙ | ︙ | |||
1532 1533 1534 1535 1536 1537 1538 | static void search_update_checkin_index(void){ db_multi_exec( "INSERT INTO ftsidx(docid,title,body)" " SELECT rowid, '', body('c',rid,NULL) FROM ftsdocs" " WHERE type='c' AND NOT idxed;" ); db_multi_exec( | > | < | > | | | < | | > > | < | | | | | > | < > | | | | | | | < | 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 | static void search_update_checkin_index(void){ db_multi_exec( "INSERT INTO ftsidx(docid,title,body)" " SELECT rowid, '', body('c',rid,NULL) FROM ftsdocs" " WHERE type='c' AND NOT idxed;" ); db_multi_exec( "UPDATE ftsdocs SET idxed=1, name=NULL," " (label,url,mtime) = " " (SELECT printf('Check-in [%%.16s] on %%s',blob.uuid," " datetime(event.mtime))," " printf('/timeline?y=ci&c=%%.20s',blob.uuid)," " event.mtime" " FROM event, blob" " WHERE event.objid=ftsdocs.rid" " AND blob.rid=ftsdocs.rid)" "WHERE ftsdocs.type='c' AND NOT ftsdocs.idxed" ); } /* ** Deal with all of the unindexed 't' terms in FTSDOCS */ static void search_update_ticket_index(void){ db_multi_exec( "INSERT INTO ftsidx(docid,title,body)" " SELECT rowid, title('t',rid,NULL), body('t',rid,NULL) FROM ftsdocs" " WHERE type='t' AND NOT idxed;" ); if( db_changes()==0 ) return; db_multi_exec( "UPDATE ftsdocs SET idxed=1, name=NULL," " (label,url,mtime) =" " (SELECT printf('Ticket: %%s (%%s)',title('t',tkt_id,null)," " datetime(tkt_mtime))," " printf('/tktview/%%.20s',tkt_uuid)," " tkt_mtime" " FROM ticket" " WHERE tkt_id=ftsdocs.rid)" "WHERE ftsdocs.type='t' AND NOT ftsdocs.idxed" ); } /* ** Deal with all of the unindexed 'w' terms in FTSDOCS */ static void search_update_wiki_index(void){ db_multi_exec( "INSERT INTO ftsidx(docid,title,body)" " SELECT rowid, title('w',rid,NULL),body('w',rid,NULL) FROM ftsdocs" " WHERE type='w' AND NOT idxed;" ); if( db_changes()==0 ) return; db_multi_exec( "UPDATE ftsdocs SET idxed=1," " (name,label,url,mtime) = " " (SELECT ftsdocs.name," " 'Wiki: '||ftsdocs.name," " '/wiki?name='||urlencode(ftsdocs.name)," " tagxref.mtime" " FROM tagxref WHERE tagxref.rid=ftsdocs.rid)" " WHERE ftsdocs.type='w' AND NOT ftsdocs.idxed" ); } /* ** Deal with all of the unindexed entries in the FTSDOCS table - that ** is to say, all the entries with FTSDOCS.IDXED=0. Add them to the ** index. |
︙ | ︙ | |||
1646 1647 1648 1649 1650 1651 1652 | ** ** stemmer (on|off) Turn the Porter stemmer on or off for indexed ** search. (Unindexed search is never stemmed.) ** ** The current search settings are displayed after any changes are applied. ** Run this command with no arguments to simply see the settings. */ | | | 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 | ** ** stemmer (on|off) Turn the Porter stemmer on or off for indexed ** search. (Unindexed search is never stemmed.) ** ** The current search settings are displayed after any changes are applied. ** Run this command with no arguments to simply see the settings. */ void fts_config_cmd(void){ static const struct { int iCmd; const char *z; } aCmd[] = { { 1, "reindex" }, { 2, "index" }, { 3, "disable" }, { 4, "enable" }, { 5, "stemmer" }, }; |
︙ | ︙ | |||
1668 1669 1670 1671 1672 1673 1674 | int i, j, n; int iCmd = 0; int iAction = 0; db_find_and_open_repository(0, 0); if( g.argc>2 ){ zSubCmd = g.argv[2]; n = (int)strlen(zSubCmd); | | | | > > > | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1725 1726 1727 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 1790 1791 1792 1793 1794 1795 1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 1808 1809 1810 1811 1812 1813 1814 1815 1816 1817 1818 1819 1820 1821 1822 1823 1824 1825 1826 1827 1828 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 1843 1844 1845 1846 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857 1858 1859 1860 1861 1862 1863 1864 1865 1866 1867 1868 1869 1870 1871 1872 1873 1874 1875 1876 1877 1878 1879 1880 1881 1882 1883 1884 1885 1886 1887 1888 | int i, j, n; int iCmd = 0; int iAction = 0; db_find_and_open_repository(0, 0); if( g.argc>2 ){ zSubCmd = g.argv[2]; n = (int)strlen(zSubCmd); for(i=0; i<count(aCmd); i++){ if( fossil_strncmp(aCmd[i].z, zSubCmd, n)==0 ) break; } if( i>=count(aCmd) ){ Blob all; blob_init(&all,0,0); for(i=0; i<count(aCmd); i++) blob_appendf(&all, " %s", aCmd[i].z); fossil_fatal("unknown \"%s\" - should be on of:%s", zSubCmd, blob_str(&all)); return; } iCmd = aCmd[i].iCmd; } g.perm.Read = 1; g.perm.RdTkt = 1; g.perm.RdWiki = 1; if( iCmd==1 ){ if( search_index_exists() ) iAction = 2; } if( iCmd==2 ){ if( g.argc<3 ) usage("index (on|off)"); iAction = 1 + is_truth(g.argv[3]); } db_begin_transaction(); /* Adjust search settings */ if( iCmd==3 || iCmd==4 ){ const char *zCtrl; if( g.argc<4 ) usage(mprintf("%s STRING",zSubCmd)); zCtrl = g.argv[3]; for(j=0; j<count(aSetng); j++){ if( strchr(zCtrl, aSetng[j].zSw[0])!=0 ){ db_set_int(aSetng[j].zSetting, iCmd-3, 0); } } } if( iCmd==5 ){ if( g.argc<4 ) usage("porter ON/OFF"); db_set_int("search-stemmer", is_truth(g.argv[3]), 0); } /* destroy or rebuild the index, if requested */ if( iAction>=1 ){ search_drop_index(); } if( iAction>=2 ){ search_rebuild_index(); } /* Always show the status before ending */ for(i=0; i<count(aSetng); i++){ fossil_print("%-16s %s\n", aSetng[i].zName, db_get_boolean(aSetng[i].zSetting,0) ? "on" : "off"); } fossil_print("%-16s %s\n", "Porter stemmer:", db_get_boolean("search-stemmer",0) ? "on" : "off"); if( search_index_exists() ){ fossil_print("%-16s enabled\n", "full-text index:"); fossil_print("%-16s %d\n", "documents:", db_int(0, "SELECT count(*) FROM ftsdocs")); }else{ fossil_print("%-16s disabled\n", "full-text index:"); } db_end_transaction(0); } /* ** WEBPAGE: test-ftsdocs ** ** Show a table of all documents currently in the search index. 
*/ void search_data_page(void){ Stmt q; const char *zId = P("id"); const char *zType = P("y"); const char *zIdxed = P("ixed"); int id; int cnt = 0; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } if( !search_index_exists() ){ @ <p>Indexed search is disabled style_footer(); return; } if( zId!=0 && (id = atoi(zId))>0 ){ /* Show information about a single ftsdocs entry */ style_header("Information about ftsdoc entry %d", id); db_prepare(&q, "SELECT type||rid, name, idxed, label, url, datetime(mtime)" " FROM ftsdocs WHERE rowid=%d", id ); if( db_step(&q)==SQLITE_ROW ){ const char *zUrl = db_column_text(&q,4); @ <table border=0> @ <tr><td align='right'>rowid:<td> <td>%d(id) @ <tr><td align='right'>id:<td><td>%s(db_column_text(&q,0)) @ <tr><td align='right'>name:<td><td>%h(db_column_text(&q,1)) @ <tr><td align='right'>idxed:<td><td>%d(db_column_int(&q,2)) @ <tr><td align='right'>label:<td><td>%h(db_column_text(&q,3)) @ <tr><td align='right'>url:<td><td> @ <a href='%R%s(zUrl)'>%h(zUrl)</a> @ <tr><td align='right'>mtime:<td><td>%s(db_column_text(&q,5)) @ </table> } db_finalize(&q); style_footer(); return; } if( zType!=0 && zType[0]!=0 && zType[1]==0 && zIdxed!=0 && (zIdxed[0]=='1' || zIdxed[0]=='0') && zIdxed[1]==0 ){ int ixed = zIdxed[0]=='1'; style_header("List of '%c' documents that are%s indexed", zType[0], ixed ? "" : " not"); db_prepare(&q, "SELECT rowid, type||rid ||' '|| coalesce(label,'')" " FROM ftsdocs WHERE type='%c' AND %s idxed", zType[0], ixed ? "" : "NOT" ); @ <ul> while( db_step(&q)==SQLITE_ROW ){ @ <li> <a href='test-ftsdocs?id=%d(db_column_int(&q,0))'> @ %h(db_column_text(&q,1))</a> } @ </ul> db_finalize(&q); style_footer(); return; } style_header("Summary of ftsdocs"); db_prepare(&q, "SELECT type, idxed, count(*) FROM ftsdocs" " GROUP BY 1, 2 ORDER BY 3 DESC" ); @ <table border=1 cellpadding=3 cellspacing=0> @ <thead> @ <tr><th>Type<th>Indexed?<th>Count<th>Link @ </thead> @ <tbody> while( db_step(&q)==SQLITE_ROW ){ const char *zType = db_column_text(&q,0); int idxed = db_column_int(&q,1); int n = db_column_int(&q,2); @ <tr><td>%h(zType)<td>%d(idxed) @ <td>%d(n) @ <td><a href='test-ftsdocs?y=%s(zType)&ixed=%d(idxed)'>listing</a> @ </tr> cnt += n; } @ </tbody><tfooter> @ <tr><th>Total<th><th>%d(cnt)<th> @ </tfooter> @ </table> style_footer(); } |
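Tying the fts-config command above to a concrete session (the setting letters follow the single-character switches in aSetng and are assumed here to be c/d/t/w for check-ins, docs, tickets, and wiki):

    fossil fts-config enable dw
    fossil fts-config index on
    fossil fts-config

The first call turns on document and wiki search, the second builds the full-text index, and the bare invocation prints the current settings, the stemmer state, and whether the index exists, as implemented above.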
Changes to src/setup.c.
︙ | ︙ | |||
17 18 19 20 21 22 23 | ** ** Implementation of the Setup page */ #include "config.h" #include <assert.h> #include "setup.h" | < < < < < < < < < < < < | 17 18 19 20 21 22 23 24 25 26 27 28 29 30 | ** ** Implementation of the Setup page */ #include "config.h" #include <assert.h> #include "setup.h" /* ** Output a single entry for a menu generated using an HTML table. ** If zLink is not NULL or an empty string, then it is the page that ** the menu entry will hyperlink to. If zLink is NULL or "", then ** the menu entry has no hyperlink - it is disabled. */ void setup_menu_entry( |
︙ | ︙ | |||
113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 | setup_menu_entry("Skins", "setup_skin", "Select and/or modify the web interface \"skins\""); setup_menu_entry("Moderation", "setup_modreq", "Enable/Disable requiring moderator approval of Wiki and/or Ticket" " changes and attachments."); setup_menu_entry("Ad-Unit", "setup_adunit", "Edit HTML text for an ad unit inserted after the menu bar"); setup_menu_entry("Logo", "setup_logo", "Change the logo and background images for the server"); setup_menu_entry("Shunned", "shun", "Show artifacts that are shunned by this repository"); setup_menu_entry("Artifact Receipts Log", "rcvfromlist", "A record of received artifacts and their sources"); setup_menu_entry("User Log", "access_log", "A record of login attempts"); setup_menu_entry("Administrative Log", "admin_log", "View the admin_log entries"); setup_menu_entry("Stats", "stat", "Repository Status Reports"); setup_menu_entry("Sitemap", "sitemap", "Links to miscellaneous pages"); setup_menu_entry("SQL", "admin_sql", "Enter raw SQL commands"); setup_menu_entry("TH1", "admin_th1", | > > > > | 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 | setup_menu_entry("Skins", "setup_skin", "Select and/or modify the web interface \"skins\""); setup_menu_entry("Moderation", "setup_modreq", "Enable/Disable requiring moderator approval of Wiki and/or Ticket" " changes and attachments."); setup_menu_entry("Ad-Unit", "setup_adunit", "Edit HTML text for an ad unit inserted after the menu bar"); setup_menu_entry("Web-Cache", "cachestat", "View the status of the expensive-page cache"); setup_menu_entry("Logo", "setup_logo", "Change the logo and background images for the server"); setup_menu_entry("Shunned", "shun", "Show artifacts that are shunned by this repository"); setup_menu_entry("Artifact Receipts Log", "rcvfromlist", "A record of received artifacts and their sources"); setup_menu_entry("User Log", "access_log", "A record of login attempts"); setup_menu_entry("Administrative Log", "admin_log", "View the admin_log entries"); setup_menu_entry("Unversioned Files", "uvlist?byage=1", "Show all unversioned files held"); setup_menu_entry("Stats", "stat", "Repository Status Reports"); setup_menu_entry("Sitemap", "sitemap", "Links to miscellaneous pages"); setup_menu_entry("SQL", "admin_sql", "Enter raw SQL commands"); setup_menu_entry("TH1", "admin_th1", |
︙ | ︙ | |||
151 152 153 154 155 156 157 | login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } | | | | > > > | | | 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 | login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_submenu_element("Add", "setup_uedit"); style_submenu_element("Log", "access_log"); style_submenu_element("Help", "setup_ulist_notes"); style_header("User List"); @ <table border=1 cellpadding=2 cellspacing=0 class='userTable'> @ <thead><tr> @ <th>UID <th>Category @ <th>Capabilities (<a href='%R/setup_ucap_list'>key</a>) @ <th>Info <th>Last Change</tr></thead> @ <tbody> db_prepare(&s, "SELECT uid, login, cap, date(mtime,'unixepoch')" " FROM user" " WHERE login IN ('anonymous','nobody','developer','reader')" " ORDER BY login" ); while( db_step(&s)==SQLITE_ROW ){ int uid = db_column_int(&s, 0); const char *zLogin = db_column_text(&s, 1); const char *zCap = db_column_text(&s, 2); const char *zDate = db_column_text(&s, 4); @ <tr> @ <td><a href='setup_uedit?id=%d(uid)'>%d(uid)</a> @ <td><a href='setup_uedit?id=%d(uid)'>%h(zLogin)</a> @ <td>%h(zCap) if( fossil_strcmp(zLogin,"anonymous")==0 ){ @ <td>All logged-in users }else if( fossil_strcmp(zLogin,"developer")==0 ){ @ <td>Users with '<b>v</b>' capability }else if( fossil_strcmp(zLogin,"nobody")==0 ){ @ <td>All users without login }else if( fossil_strcmp(zLogin,"reader")==0 ){ |
︙ | ︙ | |||
232 233 234 235 236 237 238 | @ </tbody></table> db_finalize(&s); output_table_sorting_javascript("userlist","nktxTT",2); style_footer(); } /* | < < < | < | < < < < | 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 | @ </tbody></table> db_finalize(&s); output_table_sorting_javascript("userlist","nktxTT",2); style_footer(); } /* ** Render the user-capability table */ static void setup_usercap_table(void){ @ <table> @ <tr><th valign="top">a</th> @ <td><i>Admin:</i> Create and delete users</td></tr> @ <tr><th valign="top">b</th> @ <td><i>Attach:</i> Add attachments to wiki or tickets</td></tr> @ <tr><th valign="top">c</th> @ <td><i>Append-Tkt:</i> Append to tickets</td></tr> |
︙ | ︙ | |||
295 296 297 298 299 300 301 302 303 304 | @ <tr><th valign="top">v</th> @ <td><i>Developer:</i> Inherit privileges of @ user <tt>developer</tt></td></tr> @ <tr><th valign="top">w</th> @ <td><i>Write-Tkt:</i> Edit tickets</td></tr> @ <tr><th valign="top">x</th> @ <td><i>Private:</i> Push and/or pull private branches</td></tr> @ <tr><th valign="top">z</th> @ <td><i>Zip download:</i> Download a ZIP archive or tarball</td></tr> @ </table> | > > < > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 | @ <tr><th valign="top">v</th> @ <td><i>Developer:</i> Inherit privileges of @ user <tt>developer</tt></td></tr> @ <tr><th valign="top">w</th> @ <td><i>Write-Tkt:</i> Edit tickets</td></tr> @ <tr><th valign="top">x</th> @ <td><i>Private:</i> Push and/or pull private branches</td></tr> @ <tr><th valign="top">y</th> @ <td><i>Write-Unver:</i> Push unversioned files</td></tr> @ <tr><th valign="top">z</th> @ <td><i>Zip download:</i> Download a ZIP archive or tarball</td></tr> @ </table> } /* ** WEBPAGE: setup_ulist_notes ** ** A documentation page showing notes about user configuration. This information ** used to be a side-bar on the user list page, but has been factored out for ** improved presentation. */ void setup_ulist_notes(void){ style_header("User Configuration Notes"); @ <h1>User Configuration Notes:</h1> @ <ol> @ <li><p> @ Every user, logged in or not, inherits the privileges of @ <span class="usertype">nobody</span>. @ </p></li> @ @ <li><p> @ Any human can login as <span class="usertype">anonymous</span> since the @ password is clearly displayed on the login page for them to type. The @ purpose of requiring anonymous to log in is to prevent access by spiders. @ Every logged-in user inherits the combined privileges of @ <span class="usertype">anonymous</span> and @ <span class="usertype">nobody</span>. @ </p></li> @ @ <li><p> @ Users with privilege <span class="capability">u</span> inherit the combined @ privileges of <span class="usertype">reader</span>, @ <span class="usertype">anonymous</span>, and @ <span class="usertype">nobody</span>. @ </p></li> @ @ <li><p> @ Users with privilege <span class="capability">v</span> inherit the combined @ privileges of <span class="usertype">developer</span>, @ <span class="usertype">anonymous</span>, and @ <span class="usertype">nobody</span>. @ </p></li> @ @ <li><p>The permission flags are as follows:</p> setup_usercap_table(); @ </li> @ </ol> style_footer(); } /* ** WEBPAGE: setup_ucap_list ** ** A documentation page showing the meaning of the various user capabilities ** code letters. */ void setup_ucap_list(void){ style_header("User Capability Codes"); setup_usercap_table(); style_footer(); } /* ** Return true if zPw is a valid password string. A valid ** password string is: ** ** (1) A zero-length string, or ** (2) a string that contains a character other than '*'. |
︙ | ︙ | |||
490 491 492 493 494 495 496 | for(i=0; zCap[i]; i++){ char c = zCap[i]; if( c>='a' && c<='z' ) oa[c&0x7f] = " checked=\"checked\""; } } /* figure out inherited permissions */ | | | 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 | for(i=0; zCap[i]; i++){ char c = zCap[i]; if( c>='a' && c<='z' ) oa[c&0x7f] = " checked=\"checked\""; } } /* figure out inherited permissions */ memset((char *)inherit, 0, sizeof(inherit)); if( fossil_strcmp(zLogin, "developer") ){ char *z1, *z2; z1 = z2 = db_text(0,"SELECT cap FROM user WHERE login='developer'"); while( z1 && *z1 ){ inherit[0x7f & *(z1++)] = "<span class=\"ueditInheritDeveloper\"><sub>[D]</sub></span>"; } |
︙ | ︙ | |||
530 531 532 533 534 535 536 | "<span class=\"ueditInheritNobody\"><sub>[N]</sub></span>"; } free(z2); } /* Begin generating the page */ | | | | 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 | "<span class=\"ueditInheritNobody\"><sub>[N]</sub></span>"; } free(z2); } /* Begin generating the page */ style_submenu_element("Cancel", "setup_ulist"); if( uid ){ style_header("Edit User %h", zLogin); }else{ style_header("Add A New User"); } @ <div class="ueditCapBox"> @ <form action="%s(g.zPath)" method="post"><div> login_insert_csrf_secret(); if( login_is_special(zLogin) ){ @ <input type="hidden" name="login" value="%s(zLogin)"> @ <input type="hidden" name="info" value=""> @ <input type="hidden" name="pw" value="*"> } @ <script> @ function updateCapabilityString(){ @ /* @ ** This function updates the "#usetupEditCapability" span content @ ** with the capabilities selected by the interactive user, based @ ** upon the state of the capability checkboxes. @ */ @ try { |
︙ | ︙ | |||
683 684 685 686 687 688 689 690 691 692 693 694 695 696 | @ Moderate Tickets%s(B('q'))</label><br /> @ <label><input type="checkbox" name="at"%s(oa['t']) @ onchange="updateCapabilityString()" /> @ Ticket Report%s(B('t'))</label><br /> @ <label><input type="checkbox" name="ax"%s(oa['x']) @ onchange="updateCapabilityString()" /> @ Private%s(B('x'))</label><br /> @ <label><input type="checkbox" name="az"%s(oa['z']) @ onchange="updateCapabilityString()" /> @ Download Zip%s(B('z'))</label> @ </td></tr> @ </table> @ </td> @ </tr> | > > > | 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 | @ Moderate Tickets%s(B('q'))</label><br /> @ <label><input type="checkbox" name="at"%s(oa['t']) @ onchange="updateCapabilityString()" /> @ Ticket Report%s(B('t'))</label><br /> @ <label><input type="checkbox" name="ax"%s(oa['x']) @ onchange="updateCapabilityString()" /> @ Private%s(B('x'))</label><br /> @ <label><input type="checkbox" name="ay"%s(oa['y']) @ onchange="updateCapabilityString()" /> @ Write Unversioned%s(B('y'))</label><br /> @ <label><input type="checkbox" name="az"%s(oa['z']) @ onchange="updateCapabilityString()" /> @ Download Zip%s(B('z'))</label> @ </td></tr> @ </table> @ </td> @ </tr> |
︙ | ︙ | |||
728 729 730 731 732 733 734 | @ <td> </td> @ <td><input type="submit" name="submit" value="Apply Changes" /></td> @ </tr> } @ </table> @ </div></form> @ </div> | | | 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 | @ <td> </td> @ <td><input type="submit" name="submit" value="Apply Changes" /></td> @ </tr> } @ </table> @ </div></form> @ </div> @ <script>updateCapabilityString();</script> @ <h2>Privileges And Capabilities:</h2> @ <ul> if( higherUser ){ @ <li><p class="missingPriv"> @ User %h(zLogin) has Setup privileges and you only have Admin privileges @ so you are not permitted to make changes to %h(zLogin). @ </p></li> |
︙ | ︙ | |||
914 915 916 917 918 919 920 | login_verify_csrf_secret(); db_set(zVar, iQ ? "1" : "0", 0); admin_log("Set option [%q] to [%q].", zVar, iQ ? "on" : "off"); iVal = iQ; } } | | | | 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 | login_verify_csrf_secret(); db_set(zVar, iQ ? "1" : "0", 0); admin_log("Set option [%q] to [%q].", zVar, iQ ? "on" : "off"); iVal = iQ; } } @ <label><input type="checkbox" name="%s(zQParm)" if( iVal ){ @ checked="checked" } if( disabled ){ @ disabled="disabled" } @ /> <b>%s(zLabel)</b></label> } /* ** Generate an entry box for an attribute. */ void entry_attribute( const char *zLabel, /* The text label on the entry box */ |
︙ | ︙ | |||
1134 1135 1136 1137 1138 1139 1140 | @ <hr /> onoff_attribute( "Enable hyperlinks for \"nobody\" based on User-Agent and Javascript", "auto-hyperlink", "autohyperlink", 1, 0); @ <p>Enable hyperlinks (the equivalent of the "h" permission) for all users @ including user "nobody", as long as (1) the User-Agent string in the @ HTTP header indicates that the request is coming from an actual human | | | | | 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 | @ <hr /> onoff_attribute( "Enable hyperlinks for \"nobody\" based on User-Agent and Javascript", "auto-hyperlink", "autohyperlink", 1, 0); @ <p>Enable hyperlinks (the equivalent of the "h" permission) for all users @ including user "nobody", as long as (1) the User-Agent string in the @ HTTP header indicates that the request is coming from an actual human @ being and not a robot or spider and (2) the user agent is able to @ run Javascript in order to set the href= attribute of hyperlinks. Bots @ and spiders can forge a User-Agent string that makes them seem to be a @ normal browser and they can run javascript just like browsers. But most @ bots do not go to that much trouble so this is normally an effective @ defense.</p> @ @ <p>You do not normally want a bot to walk your entire repository because @ if it does, your server will end up computing diffs and annotations for @ every historical version of every file and creating ZIPs and tarballs of @ every historical check-in, which can use a lot of CPU and bandwidth @ even for relatively small projects.</p> @ @ <p>Additional parameters that control this behavior:</p> @ <blockquote> onoff_attribute("Enable hyperlinks for humans (as deduced from the UserAgent " " HTTP header string)", "auto-hyperlink-ishuman", "ahis", 0, 0); @ <br /> onoff_attribute("Require mouse movement before enabling hyperlinks", "auto-hyperlink-mouseover", "ahmo", 0, 0); @ <br /> entry_attribute("Delay before enabling hyperlinks (milliseconds)", 5, "auto-hyperlink-delay", "ah-delay", "10", 0); @ </blockquote> @ <p>Hyperlinks for user "nobody" are normally enabled as soon as the page @ finishes loading. But the first check-box below can be set to require mouse @ movement before enabling the links. One can also set a delay prior to enabling @ links by enter a positive number of milliseconds in the entry box above.</p> |
︙ | ︙ | |||
1312 1313 1314 1315 1316 1317 1318 | @ </table> @ @ <p><form action="%s(g.zTop)/setup_login_group" method="post"><div> login_insert_csrf_secret(); @ To leave this login group press @ <input type="submit" value="Leave Login Group" name="leave"> @ </form></p> | | | 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 | @ </table> @ @ <p><form action="%s(g.zTop)/setup_login_group" method="post"><div> login_insert_csrf_secret(); @ To leave this login group press @ <input type="submit" value="Leave Login Group" name="leave"> @ </form></p> @ <hr /><h2>Implementation Details</h2> @ <p>The following are fields from the CONFIG table related to login-groups, @ provided here for instructional and debugging purposes:</p> @ <table border='1' id='configTab'> @ <thead><tr><th>Config.Name<th>Config.Value<th>Config.mtime</tr></thead><tbody> db_prepare(&q, "SELECT name, value, datetime(mtime,'unixepoch') FROM config" " WHERE name GLOB 'peer-*'" " OR name GLOB 'project-*'" |
︙ | ︙ | |||
1401 1402 1403 1404 1405 1406 1407 | @ %s(zTmDiff) hours behind UTC.</p> }else{ @ %s(zTmDiff) hours ahead of UTC.</p> } @ <hr /> multiple_choice_attribute("Per-Item Time Format", "timeline-date-format", | | | 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 | @ %s(zTmDiff) hours behind UTC.</p> }else{ @ %s(zTmDiff) hours ahead of UTC.</p> } @ <hr /> multiple_choice_attribute("Per-Item Time Format", "timeline-date-format", "tdf", "0", count(azTimeFormats)/2, azTimeFormats); @ <p>If the "HH:MM" or "HH:MM:SS" format is selected, then the date is shown @ in a separate box (using CSS class "timelineDate") whenever the date changes. @ With the "YYYY-MM-DD HH:MM" and "YYMMDD ..." formats, the complete date @ and time is shown on every timeline entry (using the CSS class "timelineTime").</p> @ <hr /> onoff_attribute("Show version differences by default", |
︙ | ︙ | |||
1443 1444 1445 1446 1447 1448 1449 | login_check_credentials(); if( !g.perm.Setup ){ login_needed(0); return; } | < | 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 1477 1478 1479 1480 | login_check_credentials(); if( !g.perm.Setup ){ login_needed(0); return; } style_header("Settings"); if(!g.repositoryOpen){ /* Provide read-only access to versioned settings, but only if no repo file was explicitly provided. */ db_open_local(0); } db_begin_transaction(); |
︙ | ︙ | |||
1508 1509 1510 1511 1512 1513 1514 | @ </td></tr></table> @ </div></form> @ <p>Settings marked with (v) are 'versionable' and will be overridden @ by the contents of files named <tt>.fossil-settings/PROPERTY</tt> @ in the check-out root. @ If such a file is present, the corresponding field above is not @ editable.</p><hr /><p> | | < | | 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 | @ </td></tr></table> @ </div></form> @ <p>Settings marked with (v) are 'versionable' and will be overridden @ by the contents of files named <tt>.fossil-settings/PROPERTY</tt> @ in the check-out root. @ If such a file is present, the corresponding field above is not @ editable.</p><hr /><p> @ These settings work the same as the @ <a href='%R/help?cmd=settings'>fossil set</a> command. db_end_transaction(0); style_footer(); } /* ** WEBPAGE: setup_config ** |
︙ | ︙ | |||
1890 1891 1892 1893 1894 1895 1896 | @ <p><b>Caution:</b> There are no restrictions on the SQL that can be @ run by this page. You can do serious and irrepairable damage to the @ repository. Proceed with extreme caution.</p> @ @ <p>Only the first statement in the entry box will be run. @ Any subsequent statements will be silently ignored.</p> @ | | | | | < | | < | 1912 1913 1914 1915 1916 1917 1918 1919 1920 1921 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 1950 | @ <p><b>Caution:</b> There are no restrictions on the SQL that can be @ run by this page. You can do serious and irrepairable damage to the @ repository. Proceed with extreme caution.</p> @ @ <p>Only the first statement in the entry box will be run. @ Any subsequent statements will be silently ignored.</p> @ @ <p>Database names:<ul><li>repository if( g.zConfigDbName ){ @ <li>configdb } if( g.localOpen ){ @ <li>localdb } @ </ul></p> @ @ <form method="post" action="%s(g.zTop)/admin_sql"> login_insert_csrf_secret(); @ SQL:<br /> @ <textarea name="q" rows="5" cols="80">%h(zQ)</textarea><br /> @ <input type="submit" name="go" value="Run SQL"> @ <input type="submit" name="schema" value="Show Schema"> @ <input type="submit" name="tablelist" value="List Tables"> @ </form> if( P("schema") ){ zQ = sqlite3_mprintf( "SELECT sql FROM repository.sqlite_master WHERE sql IS NOT NULL"); go = 1; }else if( P("tablelist") ){ zQ = sqlite3_mprintf( "SELECT name FROM repository.sqlite_master WHERE type='table'" " ORDER BY name"); go = 1; } if( go ){ sqlite3_stmt *pStmt; int rc; const char *zTail; int nCol; |
︙ | ︙ | |||
2151 2152 2153 2154 2155 2156 2157 | @ <td>Search nothing. (Disables document search).</tr> @ </table> @ <hr /> entry_attribute("Document Branch", 20, "doc-branch", "db", "trunk", 0); @ <p>When searching documents, use the versions of the files found at the @ type of the "Document Branch" branch. Recommended value: "trunk". @ Document search is disabled if blank. | | | | | | | | 2171 2172 2173 2174 2175 2176 2177 2178 2179 2180 2181 2182 2183 2184 2185 2186 2187 2188 2189 2190 2191 2192 2193 2194 2195 | @ <td>Search nothing. (Disables document search).</tr> @ </table> @ <hr /> entry_attribute("Document Branch", 20, "doc-branch", "db", "trunk", 0); @ <p>When searching documents, use the versions of the files found at the @ type of the "Document Branch" branch. Recommended value: "trunk". @ Document search is disabled if blank. @ <hr /> onoff_attribute("Search Check-in Comments", "search-ci", "sc", 0, 0); @ <br /> onoff_attribute("Search Documents", "search-doc", "sd", 0, 0); @ <br /> onoff_attribute("Search Tickets", "search-tkt", "st", 0, 0); @ <br /> onoff_attribute("Search Wiki","search-wiki", "sw", 0, 0); @ <hr /> @ <p><input type="submit" name="submit" value="Apply Changes" /></p> @ <hr /> if( P("fts0") ){ search_drop_index(); }else if( P("fts1") ){ search_drop_index(); search_create_index(); search_fill_index(); search_update_index(search_restrict(SRCH_ALL)); |
︙ | ︙ |
Changes to src/shell.c.
︙ | ︙ | |||
139 140 141 142 143 144 145 146 147 148 149 150 151 152 | #if defined(_WIN32) || defined(WIN32) #include <windows.h> /* string conversion routines only needed on Win32 */ extern char *sqlite3_win32_unicode_to_utf8(LPCWSTR); extern char *sqlite3_win32_mbcs_to_utf8_v2(const char *, int); extern char *sqlite3_win32_utf8_to_mbcs_v2(const char *, int); #endif /* On Windows, we normally run with output mode of TEXT so that \n characters ** are automatically translated into \r\n. However, this behavior needs ** to be disabled in some cases (ex: when generating CSV output and when ** rendering quoted strings that contain \n characters). The following ** routines take care of that. | > | 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 | #if defined(_WIN32) || defined(WIN32) #include <windows.h> /* string conversion routines only needed on Win32 */ extern char *sqlite3_win32_unicode_to_utf8(LPCWSTR); extern char *sqlite3_win32_mbcs_to_utf8_v2(const char *, int); extern char *sqlite3_win32_utf8_to_mbcs_v2(const char *, int); extern LPWSTR sqlite3_win32_utf8_to_unicode(const char *zText); #endif /* On Windows, we normally run with output mode of TEXT so that \n characters ** are automatically translated into \r\n. However, this behavior needs ** to be disabled in some cases (ex: when generating CSV output and when ** rendering quoted strings that contain \n characters). The following ** routines take care of that. |
︙ | ︙ | |||
520 521 522 523 524 525 526 | zLine[n] = 0; break; } } #if defined(_WIN32) || defined(WIN32) /* For interactive input on Windows systems, translate the ** multi-byte characterset characters into UTF-8. */ | | | 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 | zLine[n] = 0; break; } } #if defined(_WIN32) || defined(WIN32) /* For interactive input on Windows systems, translate the ** multi-byte characterset characters into UTF-8. */ if( stdin_is_interactive && in==stdin ){ char *zTrans = sqlite3_win32_mbcs_to_utf8_v2(zLine, 0); if( zTrans ){ int nTrans = strlen30(zTrans)+1; if( nTrans>nLine ){ zLine = realloc(zLine, nTrans); if( zLine==0 ){ sqlite3_free(zTrans); |
︙ | ︙ | |||
622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 | FILE *traceOut; /* Output for sqlite3_trace() */ int nErr; /* Number of errors seen */ int mode; /* An output mode setting */ int cMode; /* temporary output mode for the current query */ int normalMode; /* Output mode before ".explain on" */ int writableSchema; /* True if PRAGMA writable_schema=ON */ int showHeader; /* True to show column names in List or Column mode */ unsigned shellFlgs; /* Various flags */ char *zDestTable; /* Name of destination table when MODE_Insert */ char colSeparator[20]; /* Column separator character for several modes */ char rowSeparator[20]; /* Row separator character for MODE_Ascii */ int colWidth[100]; /* Requested width of each column when in column mode*/ int actualWidth[100]; /* Actual width of each column */ char nullValue[20]; /* The text to print when a NULL comes back from ** the database */ char outfile[FILENAME_MAX]; /* Filename for *out */ | > > | 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 | FILE *traceOut; /* Output for sqlite3_trace() */ int nErr; /* Number of errors seen */ int mode; /* An output mode setting */ int cMode; /* temporary output mode for the current query */ int normalMode; /* Output mode before ".explain on" */ int writableSchema; /* True if PRAGMA writable_schema=ON */ int showHeader; /* True to show column names in List or Column mode */ int nCheck; /* Number of ".check" commands run */ unsigned shellFlgs; /* Various flags */ char *zDestTable; /* Name of destination table when MODE_Insert */ char zTestcase[30]; /* Name of current test case */ char colSeparator[20]; /* Column separator character for several modes */ char rowSeparator[20]; /* Row separator character for MODE_Ascii */ int colWidth[100]; /* Requested width of each column when in column mode*/ int actualWidth[100]; /* Actual width of each column */ char nullValue[20]; /* The text to print when a NULL comes back from ** the database */ char outfile[FILENAME_MAX]; /* Filename for *out */ |
︙ | ︙ | |||
661 662 663 664 665 666 667 | */ #define MODE_Line 0 /* One column per line. Blank line between records */ #define MODE_Column 1 /* One record per line in neat columns */ #define MODE_List 2 /* One record per line with a separator */ #define MODE_Semi 3 /* Same as MODE_List but append ";" to each line */ #define MODE_Html 4 /* Generate an XHTML table */ #define MODE_Insert 5 /* Generate SQL "insert" statements */ | > | | | | | > | 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 | */ #define MODE_Line 0 /* One column per line. Blank line between records */ #define MODE_Column 1 /* One record per line in neat columns */ #define MODE_List 2 /* One record per line with a separator */ #define MODE_Semi 3 /* Same as MODE_List but append ";" to each line */ #define MODE_Html 4 /* Generate an XHTML table */ #define MODE_Insert 5 /* Generate SQL "insert" statements */ #define MODE_Quote 6 /* Quote values as for SQL */ #define MODE_Tcl 7 /* Generate ANSI-C or TCL quoted elements */ #define MODE_Csv 8 /* Quote strings, numbers are plain */ #define MODE_Explain 9 /* Like MODE_Column, but do not truncate data */ #define MODE_Ascii 10 /* Use ASCII unit and record separators (0x1F/0x1E) */ #define MODE_Pretty 11 /* Pretty-print schemas */ static const char *modeDescr[] = { "line", "column", "list", "semi", "html", "insert", "quote", "tcl", "csv", "explain", "ascii", "prettyprint", }; |
︙ | ︙ | |||
890 891 892 893 894 895 896 897 898 899 900 901 902 903 | UNUSED_PARAMETER(NotUsed); seenInterrupt++; if( seenInterrupt>2 ) exit(1); if( globalDb ) sqlite3_interrupt(globalDb); } #endif /* ** When the ".auth ON" is set, the following authorizer callback is ** invoked. It always returns SQLITE_OK. */ static int shellAuth( void *pClientData, int op, | > | 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 | UNUSED_PARAMETER(NotUsed); seenInterrupt++; if( seenInterrupt>2 ) exit(1); if( globalDb ) sqlite3_interrupt(globalDb); } #endif #ifndef SQLITE_OMIT_AUTHORIZATION /* ** When the ".auth ON" is set, the following authorizer callback is ** invoked. It always returns SQLITE_OK. */ static int shellAuth( void *pClientData, int op, |
︙ | ︙ | |||
922 923 924 925 926 927 928 | }; int i; const char *az[4]; az[0] = zA1; az[1] = zA2; az[2] = zA3; az[3] = zA4; | | > | | 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 | }; int i; const char *az[4]; az[0] = zA1; az[1] = zA2; az[2] = zA3; az[3] = zA4; utf8_printf(p->out, "authorizer: %s", azAction[op]); for(i=0; i<4; i++){ raw_printf(p->out, " "); if( az[i] ){ output_c_string(p->out, az[i]); }else{ raw_printf(p->out, "NULL"); } } raw_printf(p->out, "\n"); return SQLITE_OK; } #endif /* ** This is the callback routine that the shell ** invokes for each row of a query result. */ static int shell_callback( void *pArg, |
︙ | ︙ | |||
1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 | output_csv(p, azArg[i], i<nArg-1); } utf8_printf(p->out, "%s", p->rowSeparator); } setTextMode(p->out, 1); break; } case MODE_Insert: { p->cnt++; if( azArg==0 ) break; | > > | | | | | | | | | | > | 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 | output_csv(p, azArg[i], i<nArg-1); } utf8_printf(p->out, "%s", p->rowSeparator); } setTextMode(p->out, 1); break; } case MODE_Quote: case MODE_Insert: { p->cnt++; if( azArg==0 ) break; if( p->cMode==MODE_Insert ){ utf8_printf(p->out,"INSERT INTO %s",p->zDestTable); if( p->showHeader ){ raw_printf(p->out,"("); for(i=0; i<nArg; i++){ char *zSep = i>0 ? ",": ""; utf8_printf(p->out, "%s%s", zSep, azCol[i]); } raw_printf(p->out,")"); } raw_printf(p->out," VALUES("); } for(i=0; i<nArg; i++){ char *zSep = i>0 ? ",": ""; if( (azArg[i]==0) || (aiType && aiType[i]==SQLITE_NULL) ){ utf8_printf(p->out,"%sNULL",zSep); }else if( aiType && aiType[i]==SQLITE_TEXT ){ if( zSep[0] ) utf8_printf(p->out,"%s",zSep); output_quoted_string(p->out, azArg[i]); |
︙ | ︙ | |||
1224 1225 1226 1227 1228 1229 1230 | }else if( isNumber(azArg[i], 0) ){ utf8_printf(p->out,"%s%s",zSep, azArg[i]); }else{ if( zSep[0] ) utf8_printf(p->out,"%s",zSep); output_quoted_string(p->out, azArg[i]); } } | | | 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 | }else if( isNumber(azArg[i], 0) ){ utf8_printf(p->out,"%s%s",zSep, azArg[i]); }else{ if( zSep[0] ) utf8_printf(p->out,"%s",zSep); output_quoted_string(p->out, azArg[i]); } } raw_printf(p->out,p->cMode==MODE_Quote?"\n":");\n"); break; } case MODE_Ascii: { if( p->cnt++==0 && p->showHeader ){ for(i=0; i<nArg; i++){ if( i>0 ) utf8_printf(p->out, "%s", p->colSeparator); utf8_printf(p->out,"%s",azCol[i] ? azCol[i] : ""); |
︙ | ︙ | |||
1438 1439 1440 1441 1442 1443 1444 | { "write_bytes: ", "Bytes written to storage:" }, { "cancelled_write_bytes: ", "Cancelled write bytes:" }, }; int i; for(i=0; i<ArraySize(aTrans); i++){ int n = (int)strlen(aTrans[i].zPattern); if( strncmp(aTrans[i].zPattern, z, n)==0 ){ | | | 1448 1449 1450 1451 1452 1453 1454 1455 1456 1457 1458 1459 1460 1461 1462 | { "write_bytes: ", "Bytes written to storage:" }, { "cancelled_write_bytes: ", "Cancelled write bytes:" }, }; int i; for(i=0; i<ArraySize(aTrans); i++){ int n = (int)strlen(aTrans[i].zPattern); if( strncmp(aTrans[i].zPattern, z, n)==0 ){ utf8_printf(out, "%-36s %s", aTrans[i].zDesc, &z[n]); break; } } } fclose(in); } #endif |
︙ | ︙ | |||
2127 2128 2129 2130 2131 2132 2133 2134 2135 2136 2137 2138 2139 2140 2141 2142 2143 2144 2145 | return rc; } /* ** Text of a help message */ static char zHelp[] = ".auth ON|OFF Show authorizer callbacks\n" ".backup ?DB? FILE Backup DB (default \"main\") to FILE\n" ".bail on|off Stop after hitting an error. Default OFF\n" ".binary on|off Turn binary output on or off. Default OFF\n" ".changes on|off Show number of rows changed by SQL\n" ".clone NEWDB Clone data into NEWDB from the existing database\n" ".databases List names and files of attached databases\n" ".dbinfo ?DB? Show status information about the database\n" ".dump ?TABLE? ... Dump the database in an SQL text format\n" " If TABLE specified, only dump tables matching\n" " LIKE pattern TABLE.\n" ".echo on|off Turn command echo on or off\n" | > > > | 2137 2138 2139 2140 2141 2142 2143 2144 2145 2146 2147 2148 2149 2150 2151 2152 2153 2154 2155 2156 2157 2158 | return rc; } /* ** Text of a help message */ static char zHelp[] = #ifndef SQLITE_OMIT_AUTHORIZATION ".auth ON|OFF Show authorizer callbacks\n" #endif ".backup ?DB? FILE Backup DB (default \"main\") to FILE\n" ".bail on|off Stop after hitting an error. Default OFF\n" ".binary on|off Turn binary output on or off. Default OFF\n" ".changes on|off Show number of rows changed by SQL\n" ".check GLOB Fail if output since .testcase does not match\n" ".clone NEWDB Clone data into NEWDB from the existing database\n" ".databases List names and files of attached databases\n" ".dbinfo ?DB? Show status information about the database\n" ".dump ?TABLE? ... Dump the database in an SQL text format\n" " If TABLE specified, only dump tables matching\n" " LIKE pattern TABLE.\n" ".echo on|off Turn command echo on or off\n" |
︙ | ︙ | |||
2165 2166 2167 2168 2169 2170 2171 2172 2173 2174 2175 | " ascii Columns/rows delimited by 0x1F and 0x1E\n" " csv Comma-separated values\n" " column Left-aligned columns. (See .width)\n" " html HTML <table> code\n" " insert SQL insert statements for TABLE\n" " line One value per line\n" " list Values delimited by .separator strings\n" " tabs Tab-separated values\n" " tcl TCL list elements\n" ".nullvalue STRING Use STRING in place of NULL values\n" ".once FILENAME Output for the next SQL command only to FILENAME\n" | > | > | 2178 2179 2180 2181 2182 2183 2184 2185 2186 2187 2188 2189 2190 2191 2192 2193 2194 2195 2196 2197 2198 | " ascii Columns/rows delimited by 0x1F and 0x1E\n" " csv Comma-separated values\n" " column Left-aligned columns. (See .width)\n" " html HTML <table> code\n" " insert SQL insert statements for TABLE\n" " line One value per line\n" " list Values delimited by .separator strings\n" " quote Escape answers as for SQL\n" " tabs Tab-separated values\n" " tcl TCL list elements\n" ".nullvalue STRING Use STRING in place of NULL values\n" ".once FILENAME Output for the next SQL command only to FILENAME\n" ".open ?--new? ?FILE? Close existing database and reopen FILE\n" " The --new starts with an empty file\n" ".output ?FILENAME? Send output to FILENAME or stdout\n" ".print STRING... Print literal STRING\n" ".prompt MAIN CONTINUE Replace the standard prompts\n" ".quit Exit this program\n" ".read FILENAME Execute SQL in FILENAME\n" ".restore ?DB? FILE Restore content of DB (default \"main\") from FILE\n" ".save FILE Write in-memory database into FILE\n" |
︙ | ︙ | |||
2192 2193 2194 2195 2196 2197 2198 2199 2200 2201 2202 2203 2204 2205 | ".shell CMD ARGS... Run CMD ARGS... in a system shell\n" ".show Show the current values for various settings\n" ".stats ?on|off? Show stats or turn stats on or off\n" ".system CMD ARGS... Run CMD ARGS... in a system shell\n" ".tables ?TABLE? List names of tables\n" " If TABLE specified, only list tables matching\n" " LIKE pattern TABLE.\n" ".timeout MS Try opening locked tables for MS milliseconds\n" ".timer on|off Turn SQL timer on or off\n" ".trace FILE|off Output each SQL statement as it is run\n" ".vfsinfo ?AUX? Information about the top-level VFS\n" ".vfslist List all available VFSes\n" ".vfsname ?AUX? Print the name of the VFS stack\n" ".width NUM1 NUM2 ... Set column widths for \"column\" mode\n" | > | 2207 2208 2209 2210 2211 2212 2213 2214 2215 2216 2217 2218 2219 2220 2221 | ".shell CMD ARGS... Run CMD ARGS... in a system shell\n" ".show Show the current values for various settings\n" ".stats ?on|off? Show stats or turn stats on or off\n" ".system CMD ARGS... Run CMD ARGS... in a system shell\n" ".tables ?TABLE? List names of tables\n" " If TABLE specified, only list tables matching\n" " LIKE pattern TABLE.\n" ".testcase NAME Begin redirecting output to 'testcase-out.txt'\n" ".timeout MS Try opening locked tables for MS milliseconds\n" ".timer on|off Turn SQL timer on or off\n" ".trace FILE|off Output each SQL statement as it is run\n" ".vfsinfo ?AUX? Information about the top-level VFS\n" ".vfslist List all available VFSes\n" ".vfsname ?AUX? Print the name of the VFS stack\n" ".width NUM1 NUM2 ... Set column widths for \"column\" mode\n" |
︙ | ︙ | |||
2228 2229 2230 2231 2232 2233 2234 2235 2236 2237 2238 2239 2240 2241 2242 2243 2244 2245 | ); } #endif /* Forward reference */ static int process_input(ShellState *p, FILE *in); /* ** Implementation of the "readfile(X)" SQL function. The entire content ** of the file named X is read and returned as a BLOB. NULL is returned ** if the file does not exist or is unreadable. */ static void readfileFunc( sqlite3_context *context, int argc, sqlite3_value **argv ){ const char *zName; | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > < < | < < < < < < | < < < < | 2244 2245 2246 2247 2248 2249 2250 2251 2252 2253 2254 2255 2256 2257 2258 2259 2260 2261 2262 2263 2264 2265 2266 2267 2268 2269 2270 2271 2272 2273 2274 2275 2276 2277 2278 2279 2280 2281 2282 2283 2284 2285 2286 2287 2288 2289 2290 2291 2292 2293 2294 2295 2296 2297 2298 2299 2300 2301 2302 2303 2304 | ); } #endif /* Forward reference */ static int process_input(ShellState *p, FILE *in); /* ** Read the content of a file into memory obtained from sqlite3_malloc64(). ** The caller is responsible for freeing the memory. ** ** NULL is returned if any error is encountered. */ static char *readFile(const char *zName){ FILE *in = fopen(zName, "rb"); long nIn; size_t nRead; char *pBuf; if( in==0 ) return 0; fseek(in, 0, SEEK_END); nIn = ftell(in); rewind(in); pBuf = sqlite3_malloc64( nIn+1 ); if( pBuf==0 ) return 0; nRead = fread(pBuf, nIn, 1, in); fclose(in); if( nRead!=1 ){ sqlite3_free(pBuf); return 0; } pBuf[nIn] = 0; return pBuf; } /* ** Implementation of the "readfile(X)" SQL function. The entire content ** of the file named X is read and returned as a BLOB. NULL is returned ** if the file does not exist or is unreadable. */ static void readfileFunc( sqlite3_context *context, int argc, sqlite3_value **argv ){ const char *zName; void *pBuf; UNUSED_PARAMETER(argc); zName = (const char*)sqlite3_value_text(argv[0]); if( zName==0 ) return; pBuf = readFile(zName); if( pBuf ) sqlite3_result_blob(context, pBuf, -1, sqlite3_free); } /* ** Implementation of the "writefile(X,Y)" SQL function. The argument Y ** is written into file X. The number of bytes written is returned. Or ** NULL is returned if something goes wrong, such as being unable to open ** file X for writing. |
︙ | ︙ | |||
2539 2540 2541 2542 2543 2544 2545 | } return f; } /* ** A routine for handling output from sqlite3_trace(). */ | | > > > > > > > > > | 2572 2573 2574 2575 2576 2577 2578 2579 2580 2581 2582 2583 2584 2585 2586 2587 2588 2589 2590 2591 2592 2593 2594 2595 2596 2597 2598 2599 2600 2601 | } return f; } /* ** A routine for handling output from sqlite3_trace(). */ static int sql_trace_callback( unsigned mType, void *pArg, void *pP, void *pX ){ FILE *f = (FILE*)pArg; UNUSED_PARAMETER(mType); UNUSED_PARAMETER(pP); if( f ){ const char *z = (const char*)pX; int i = (int)strlen(z); while( i>0 && z[i-1]==';' ){ i--; } utf8_printf(f, "%.*s;\n", i, z); } return 0; } /* ** A no-op routine that runs with the ".breakpoint" doc-command. This is ** a useful spot to set a debugger breakpoint. */ static void test_breakpoint(void){ |
︙ | ︙ | |||
2942 2943 2944 2945 2946 2947 2948 | sqlite3_finalize(pStmt); return res; } /* ** Convert a 2-byte or 4-byte big-endian integer into a native integer */ | | | | 2984 2985 2986 2987 2988 2989 2990 2991 2992 2993 2994 2995 2996 2997 2998 2999 3000 3001 | sqlite3_finalize(pStmt); return res; } /* ** Convert a 2-byte or 4-byte big-endian integer into a native integer */ static unsigned int get2byteInt(unsigned char *a){ return (a[0]<<8) + a[1]; } static unsigned int get4byteInt(unsigned char *a){ return (a[0]<<24) + (a[1]<<16) + (a[2]<<8) + a[3]; } /* ** Implementation of the ".info" command. ** ** Return 1 on error, 2 to exit, and 0 otherwise. |
︙ | ︙ | |||
3049 3050 3051 3052 3053 3054 3055 3056 3057 3058 3059 3060 3061 3062 3063 3064 3065 3066 3067 3068 3069 3070 3071 3072 3073 | /* ** Print an out-of-memory message to stderr and return 1. */ static int shellNomemError(void){ raw_printf(stderr, "Error: out of memory\n"); return 1; } /* ** Compare the string as a command-line option with either one or two ** initial "-" characters. */ static int optionMatch(const char *zStr, const char *zOpt){ if( zStr[0]!='-' ) return 0; zStr++; if( zStr[0]=='-' ) zStr++; return strcmp(zStr, zOpt)==0; } /* ** If an input line begins with "." then invoke this routine to ** process that line. ** ** Return 1 on error, 2 to exit, and 0 otherwise. */ | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 3091 3092 3093 3094 3095 3096 3097 3098 3099 3100 3101 3102 3103 3104 3105 3106 3107 3108 3109 3110 3111 3112 3113 3114 3115 3116 3117 3118 3119 3120 3121 3122 3123 3124 3125 3126 3127 3128 3129 3130 3131 3132 3133 3134 3135 3136 3137 3138 3139 3140 3141 3142 3143 3144 3145 3146 3147 3148 3149 3150 3151 3152 3153 3154 3155 3156 3157 3158 3159 3160 3161 3162 3163 3164 3165 3166 3167 3168 3169 3170 3171 3172 3173 3174 3175 3176 3177 3178 3179 3180 3181 3182 3183 3184 3185 3186 3187 3188 3189 3190 3191 3192 3193 3194 3195 3196 3197 3198 3199 3200 3201 3202 3203 3204 3205 3206 3207 3208 3209 3210 3211 3212 3213 3214 3215 3216 3217 3218 3219 3220 3221 3222 3223 3224 3225 3226 3227 3228 | /* ** Print an out-of-memory message to stderr and return 1. */ static int shellNomemError(void){ raw_printf(stderr, "Error: out of memory\n"); return 1; } /* ** Compare the pattern in zGlob[] against the text in z[]. Return TRUE ** if they match and FALSE (0) if they do not match. ** ** Globbing rules: ** ** '*' Matches any sequence of zero or more characters. ** ** '?' Matches exactly one character. ** ** [...] Matches one character from the enclosed list of ** characters. ** ** [^...] Matches one character not in the enclosed list. ** ** '#' Matches any sequence of one or more digits with an ** optional + or - sign in front ** ** ' ' Any span of whitespace matches any other span of ** whitespace. ** ** Extra whitespace at the end of z[] is ignored. */ static int testcase_glob(const char *zGlob, const char *z){ int c, c2; int invert; int seen; while( (c = (*(zGlob++)))!=0 ){ if( IsSpace(c) ){ if( !IsSpace(*z) ) return 0; while( IsSpace(*zGlob) ) zGlob++; while( IsSpace(*z) ) z++; }else if( c=='*' ){ while( (c=(*(zGlob++))) == '*' || c=='?' ){ if( c=='?' && (*(z++))==0 ) return 0; } if( c==0 ){ return 1; }else if( c=='[' ){ while( *z && testcase_glob(zGlob-1,z)==0 ){ z++; } return (*z)!=0; } while( (c2 = (*(z++)))!=0 ){ while( c2!=c ){ c2 = *(z++); if( c2==0 ) return 0; } if( testcase_glob(zGlob,z) ) return 1; } return 0; }else if( c=='?' 
){ if( (*(z++))==0 ) return 0; }else if( c=='[' ){ int prior_c = 0; seen = 0; invert = 0; c = *(z++); if( c==0 ) return 0; c2 = *(zGlob++); if( c2=='^' ){ invert = 1; c2 = *(zGlob++); } if( c2==']' ){ if( c==']' ) seen = 1; c2 = *(zGlob++); } while( c2 && c2!=']' ){ if( c2=='-' && zGlob[0]!=']' && zGlob[0]!=0 && prior_c>0 ){ c2 = *(zGlob++); if( c>=prior_c && c<=c2 ) seen = 1; prior_c = 0; }else{ if( c==c2 ){ seen = 1; } prior_c = c2; } c2 = *(zGlob++); } if( c2==0 || (seen ^ invert)==0 ) return 0; }else if( c=='#' ){ if( (z[0]=='-' || z[0]=='+') && IsDigit(z[1]) ) z++; if( !IsDigit(z[0]) ) return 0; z++; while( IsDigit(z[0]) ){ z++; } }else{ if( c!=(*(z++)) ) return 0; } } while( IsSpace(*z) ){ z++; } return *z==0; } /* ** Compare the string as a command-line option with either one or two ** initial "-" characters. */ static int optionMatch(const char *zStr, const char *zOpt){ if( zStr[0]!='-' ) return 0; zStr++; if( zStr[0]=='-' ) zStr++; return strcmp(zStr, zOpt)==0; } /* ** Delete a file. */ int shellDeleteFile(const char *zFilename){ int rc; #ifdef _WIN32 wchar_t *z = sqlite3_win32_utf8_to_unicode(zFilename); rc = _wunlink(z); sqlite3_free(z); #else rc = unlink(zFilename); #endif return rc; } /* ** If an input line begins with "." then invoke this routine to ** process that line. ** ** Return 1 on error, 2 to exit, and 0 otherwise. */ |
︙ | ︙ | |||
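The globbing rules documented in the new testcase_glob() above are what the shell's new ".testcase"/".check" pair relies on: ".testcase" redirects output to testcase-out.txt and ".check GLOB" compares that file against a pattern. A few concrete calls make the rules easier to read. This is a hypothetical sketch, assuming it sits in the same translation unit as the testcase_glob() defined in the hunk above; the pattern strings are illustrative only.

#include <assert.h>

/* Illustrative matches for the testcase_glob() rules above. */
static void testcase_glob_examples(void){
  /* '*' matches any run of characters, '?' exactly one character */
  assert( testcase_glob("hello*", "hello, world") );
  assert( testcase_glob("?at", "cat") );

  /* '#' matches an optionally signed run of digits */
  assert( testcase_glob("# rows", "42 rows") );

  /* '[...]' matches one character from the enclosed set */
  assert( testcase_glob("[abc]x", "bx") );

  /* a space in the pattern matches any span of whitespace, and
  ** trailing whitespace in the text is ignored */
  assert( testcase_glob("a b", "a    b   ") );

  /* non-matches return 0 */
  assert( !testcase_glob("#", "abc") );
}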
3104 3105 3106 3107 3108 3109 3110 3111 3112 3113 3114 3115 3116 3117 3118 3119 3120 3121 3122 3123 3124 3125 3126 3127 3128 3129 3130 | /* Process the input line. */ if( nArg==0 ) return 0; /* no tokens, no error */ n = strlen30(azArg[0]); c = azArg[0][0]; if( c=='a' && strncmp(azArg[0], "auth", n)==0 ){ if( nArg!=2 ){ raw_printf(stderr, "Usage: .auth ON|OFF\n"); rc = 1; goto meta_command_exit; } open_db(p, 0); if( booleanValue(azArg[1]) ){ sqlite3_set_authorizer(p->db, shellAuth, p); }else{ sqlite3_set_authorizer(p->db, 0, 0); } }else if( (c=='b' && n>=3 && strncmp(azArg[0], "backup", n)==0) || (c=='s' && n>=3 && strncmp(azArg[0], "save", n)==0) ){ const char *zDestFile = 0; const char *zDb = 0; sqlite3 *pDest; | > > | 3259 3260 3261 3262 3263 3264 3265 3266 3267 3268 3269 3270 3271 3272 3273 3274 3275 3276 3277 3278 3279 3280 3281 3282 3283 3284 3285 3286 3287 | /* Process the input line. */ if( nArg==0 ) return 0; /* no tokens, no error */ n = strlen30(azArg[0]); c = azArg[0][0]; #ifndef SQLITE_OMIT_AUTHORIZATION if( c=='a' && strncmp(azArg[0], "auth", n)==0 ){ if( nArg!=2 ){ raw_printf(stderr, "Usage: .auth ON|OFF\n"); rc = 1; goto meta_command_exit; } open_db(p, 0); if( booleanValue(azArg[1]) ){ sqlite3_set_authorizer(p->db, shellAuth, p); }else{ sqlite3_set_authorizer(p->db, 0, 0); } }else #endif if( (c=='b' && n>=3 && strncmp(azArg[0], "backup", n)==0) || (c=='s' && n>=3 && strncmp(azArg[0], "save", n)==0) ){ const char *zDestFile = 0; const char *zDb = 0; sqlite3 *pDest; |
︙ | ︙ | |||
3211 3212 3213 3214 3215 3216 3217 3218 3219 3220 3221 3222 3223 3224 | if( nArg==2 ){ p->countChanges = booleanValue(azArg[1]); }else{ raw_printf(stderr, "Usage: .changes on|off\n"); rc = 1; } }else if( c=='c' && strncmp(azArg[0], "clone", n)==0 ){ if( nArg==2 ){ tryToClone(p, azArg[1]); }else{ raw_printf(stderr, "Usage: .clone FILENAME\n"); rc = 1; | > > > > > > > > > > > > > > > > > > > > > > > > > | 3368 3369 3370 3371 3372 3373 3374 3375 3376 3377 3378 3379 3380 3381 3382 3383 3384 3385 3386 3387 3388 3389 3390 3391 3392 3393 3394 3395 3396 3397 3398 3399 3400 3401 3402 3403 3404 3405 3406 | if( nArg==2 ){ p->countChanges = booleanValue(azArg[1]); }else{ raw_printf(stderr, "Usage: .changes on|off\n"); rc = 1; } }else /* Cancel output redirection, if it is currently set (by .testcase) ** Then read the content of the testcase-out.txt file and compare against ** azArg[1]. If there are differences, report an error and exit. */ if( c=='c' && n>=3 && strncmp(azArg[0], "check", n)==0 ){ char *zRes = 0; output_reset(p); if( nArg!=2 ){ raw_printf(stderr, "Usage: .check GLOB-PATTERN\n"); rc = 2; }else if( (zRes = readFile("testcase-out.txt"))==0 ){ raw_printf(stderr, "Error: cannot read 'testcase-out.txt'\n"); rc = 2; }else if( testcase_glob(azArg[1],zRes)==0 ){ utf8_printf(stderr, "testcase-%s FAILED\n Expected: [%s]\n Got: [%s]\n", p->zTestcase, azArg[1], zRes); rc = 2; }else{ utf8_printf(stdout, "testcase-%s ok\n", p->zTestcase); p->nCheck++; } sqlite3_free(zRes); }else if( c=='c' && strncmp(azArg[0], "clone", n)==0 ){ if( nArg==2 ){ tryToClone(p, azArg[1]); }else{ raw_printf(stderr, "Usage: .clone FILENAME\n"); rc = 1; |
︙ | ︙ | |||
3797 3798 3799 3800 3801 3802 3803 3804 3805 3806 3807 3808 3809 3810 | sqlite3_snprintf(sizeof(p->rowSeparator), p->rowSeparator, SEP_CrLf); }else if( c2=='t' && strncmp(azArg[1],"tabs",n2)==0 ){ p->mode = MODE_List; sqlite3_snprintf(sizeof(p->colSeparator), p->colSeparator, SEP_Tab); }else if( c2=='i' && strncmp(azArg[1],"insert",n2)==0 ){ p->mode = MODE_Insert; set_table_name(p, nArg>=3 ? azArg[2] : "table"); }else if( c2=='a' && strncmp(azArg[1],"ascii",n2)==0 ){ p->mode = MODE_Ascii; sqlite3_snprintf(sizeof(p->colSeparator), p->colSeparator, SEP_Unit); sqlite3_snprintf(sizeof(p->rowSeparator), p->rowSeparator, SEP_Record); }else { raw_printf(stderr, "Error: mode should be one of: " "ascii column csv html insert line list tabs tcl\n"); | > > | 3979 3980 3981 3982 3983 3984 3985 3986 3987 3988 3989 3990 3991 3992 3993 3994 | sqlite3_snprintf(sizeof(p->rowSeparator), p->rowSeparator, SEP_CrLf); }else if( c2=='t' && strncmp(azArg[1],"tabs",n2)==0 ){ p->mode = MODE_List; sqlite3_snprintf(sizeof(p->colSeparator), p->colSeparator, SEP_Tab); }else if( c2=='i' && strncmp(azArg[1],"insert",n2)==0 ){ p->mode = MODE_Insert; set_table_name(p, nArg>=3 ? azArg[2] : "table"); }else if( c2=='q' && strncmp(azArg[1],"quote",n2)==0 ){ p->mode = MODE_Quote; }else if( c2=='a' && strncmp(azArg[1],"ascii",n2)==0 ){ p->mode = MODE_Ascii; sqlite3_snprintf(sizeof(p->colSeparator), p->colSeparator, SEP_Unit); sqlite3_snprintf(sizeof(p->rowSeparator), p->rowSeparator, SEP_Record); }else { raw_printf(stderr, "Error: mode should be one of: " "ascii column csv html insert line list tabs tcl\n"); |
︙ | ︙ | |||
3820 3821 3822 3823 3824 3825 3826 | }else{ raw_printf(stderr, "Usage: .nullvalue STRING\n"); rc = 1; } }else if( c=='o' && strncmp(azArg[0], "open", n)==0 && n>=2 ){ | < < | > > > > > > > > > > > > > > > > > > > | > > | | | < < > | < | | > > | > | > | 4004 4005 4006 4007 4008 4009 4010 4011 4012 4013 4014 4015 4016 4017 4018 4019 4020 4021 4022 4023 4024 4025 4026 4027 4028 4029 4030 4031 4032 4033 4034 4035 4036 4037 4038 4039 4040 4041 4042 4043 4044 4045 4046 4047 4048 4049 4050 4051 4052 4053 4054 | }else{ raw_printf(stderr, "Usage: .nullvalue STRING\n"); rc = 1; } }else if( c=='o' && strncmp(azArg[0], "open", n)==0 && n>=2 ){ char *zNewFilename; /* Name of the database file to open */ int iName = 1; /* Index in azArg[] of the filename */ int newFlag = 0; /* True to delete file before opening */ /* Close the existing database */ session_close_all(p); sqlite3_close(p->db); p->db = 0; sqlite3_free(p->zFreeOnClose); p->zFreeOnClose = 0; /* Check for command-line arguments */ for(iName=1; iName<nArg && azArg[iName][0]=='-'; iName++){ const char *z = azArg[iName]; if( optionMatch(z,"new") ){ newFlag = 1; }else if( z[0]=='-' ){ utf8_printf(stderr, "unknown option: %s\n", z); rc = 1; goto meta_command_exit; } } /* If a filename is specified, try to open it first */ zNewFilename = nArg>iName ? sqlite3_mprintf("%s", azArg[iName]) : 0; if( zNewFilename ){ if( newFlag ) shellDeleteFile(zNewFilename); p->zDbFilename = zNewFilename; open_db(p, 1); if( p->db==0 ){ utf8_printf(stderr, "Error: cannot open '%s'\n", zNewFilename); sqlite3_free(zNewFilename); }else{ p->zFreeOnClose = zNewFilename; } } if( p->db==0 ){ /* As a fall-back open a TEMP database */ p->zDbFilename = 0; open_db(p, 0); } }else if( c=='o' && (strncmp(azArg[0], "output", n)==0 || strncmp(azArg[0], "once", n)==0) ){ const char *zFile = nArg>=2 ? azArg[1] : "stdout"; |
︙ | ︙ | |||
4365 4366 4367 4368 4369 4370 4371 4372 4373 4374 4375 4376 4377 4378 | raw_printf(p->out, "\n"); utf8_printf(p->out, "%12.12s: %s\n","stats", azBool[p->statsOn!=0]); utf8_printf(p->out, "%12.12s: ", "width"); for (i=0;i<(int)ArraySize(p->colWidth) && p->colWidth[i] != 0;i++) { raw_printf(p->out, "%d ", p->colWidth[i]); } raw_printf(p->out, "\n"); }else if( c=='s' && strncmp(azArg[0], "stats", n)==0 ){ if( nArg==2 ){ p->statsOn = booleanValue(azArg[1]); }else if( nArg==1 ){ display_stats(p->db, p, 0); | > > | 4570 4571 4572 4573 4574 4575 4576 4577 4578 4579 4580 4581 4582 4583 4584 4585 | raw_printf(p->out, "\n"); utf8_printf(p->out, "%12.12s: %s\n","stats", azBool[p->statsOn!=0]); utf8_printf(p->out, "%12.12s: ", "width"); for (i=0;i<(int)ArraySize(p->colWidth) && p->colWidth[i] != 0;i++) { raw_printf(p->out, "%d ", p->colWidth[i]); } raw_printf(p->out, "\n"); utf8_printf(p->out, "%12.12s: %s\n", "filename", p->zDbFilename ? p->zDbFilename : ""); }else if( c=='s' && strncmp(azArg[0], "stats", n)==0 ){ if( nArg==2 ){ p->statsOn = booleanValue(azArg[1]); }else if( nArg==1 ){ display_stats(p->db, p, 0); |
︙ | ︙ | |||
4481 4482 4483 4484 4485 4486 4487 4488 4489 4490 4491 4492 4493 4494 | raw_printf(p->out, "\n"); } } for(ii=0; ii<nRow; ii++) sqlite3_free(azResult[ii]); sqlite3_free(azResult); }else if( c=='t' && n>=8 && strncmp(azArg[0], "testctrl", n)==0 && nArg>=2 ){ static const struct { const char *zCtrlName; /* Name of a test-control option */ int ctrlCode; /* Integer code for that option */ } aCtrl[] = { { "prng_save", SQLITE_TESTCTRL_PRNG_SAVE }, | > > > > > > > > > > > > > > | 4688 4689 4690 4691 4692 4693 4694 4695 4696 4697 4698 4699 4700 4701 4702 4703 4704 4705 4706 4707 4708 4709 4710 4711 4712 4713 4714 4715 | raw_printf(p->out, "\n"); } } for(ii=0; ii<nRow; ii++) sqlite3_free(azResult[ii]); sqlite3_free(azResult); }else /* Begin redirecting output to the file "testcase-out.txt" */ if( c=='t' && strcmp(azArg[0],"testcase")==0 ){ output_reset(p); p->out = output_file_open("testcase-out.txt"); if( p->out==0 ){ utf8_printf(stderr, "Error: cannot open 'testcase-out.txt'\n"); } if( nArg>=2 ){ sqlite3_snprintf(sizeof(p->zTestcase), p->zTestcase, "%s", azArg[1]); }else{ sqlite3_snprintf(sizeof(p->zTestcase), p->zTestcase, "?"); } }else if( c=='t' && n>=8 && strncmp(azArg[0], "testctrl", n)==0 && nArg>=2 ){ static const struct { const char *zCtrlName; /* Name of a test-control option */ int ctrlCode; /* Integer code for that option */ } aCtrl[] = { { "prng_save", SQLITE_TESTCTRL_PRNG_SAVE }, |
︙ | ︙ | |||
4651 4652 4653 4654 4655 4656 4657 | rc = 1; goto meta_command_exit; } output_file_close(p->traceOut); p->traceOut = output_file_open(azArg[1]); #if !defined(SQLITE_OMIT_TRACE) && !defined(SQLITE_OMIT_FLOATING_POINT) if( p->traceOut==0 ){ | | | | 4872 4873 4874 4875 4876 4877 4878 4879 4880 4881 4882 4883 4884 4885 4886 4887 4888 | rc = 1; goto meta_command_exit; } output_file_close(p->traceOut); p->traceOut = output_file_open(azArg[1]); #if !defined(SQLITE_OMIT_TRACE) && !defined(SQLITE_OMIT_FLOATING_POINT) if( p->traceOut==0 ){ sqlite3_trace_v2(p->db, 0, 0, 0); }else{ sqlite3_trace_v2(p->db, SQLITE_TRACE_STMT, sql_trace_callback,p->traceOut); } #endif }else #if SQLITE_USER_AUTHENTICATION if( c=='u' && strncmp(azArg[0], "user", n)==0 ){ if( nArg<2 ){ |
︙ | ︙ | |||
4892 4893 4894 4895 4896 4897 4898 | int startline = 0; /* Line number for start of current input */ while( errCnt==0 || !bail_on_error || (in==0 && stdin_is_interactive) ){ fflush(p->out); zLine = one_input_line(in, zLine, nSql>0); if( zLine==0 ){ /* End of input */ | | | 5113 5114 5115 5116 5117 5118 5119 5120 5121 5122 5123 5124 5125 5126 5127 | int startline = 0; /* Line number for start of current input */ while( errCnt==0 || !bail_on_error || (in==0 && stdin_is_interactive) ){ fflush(p->out); zLine = one_input_line(in, zLine, nSql>0); if( zLine==0 ){ /* End of input */ if( in==0 && stdin_is_interactive ) printf("\n"); break; } if( seenInterrupt ){ if( in!=0 ) break; seenInterrupt = 0; } lineno++; |
︙ | ︙ | |||
4992 4993 4994 4995 4996 4997 4998 | return errCnt>0; } /* ** Return a pathname which is the user's home directory. A ** 0 return indicates an error of some kind. */ | | > > > > > | 5213 5214 5215 5216 5217 5218 5219 5220 5221 5222 5223 5224 5225 5226 5227 5228 5229 5230 5231 5232 5233 | return errCnt>0; } /* ** Return a pathname which is the user's home directory. A ** 0 return indicates an error of some kind. */ static char *find_home_dir(int clearFlag){ static char *home_dir = NULL; if( clearFlag ){ free(home_dir); home_dir = 0; return 0; } if( home_dir ) return home_dir; #if !defined(_WIN32) && !defined(WIN32) && !defined(_WIN32_WCE) \ && !defined(__RTP__) && !defined(_WRS_KERNEL) { struct passwd *pwent; uid_t uid = getuid(); |
︙ | ︙ | |||
5068 5069 5070 5071 5072 5073 5074 | ){ char *home_dir = NULL; const char *sqliterc = sqliterc_override; char *zBuf = 0; FILE *in = NULL; if (sqliterc == NULL) { | | | 5294 5295 5296 5297 5298 5299 5300 5301 5302 5303 5304 5305 5306 5307 5308 | ){ char *home_dir = NULL; const char *sqliterc = sqliterc_override; char *zBuf = 0; FILE *in = NULL; if (sqliterc == NULL) { home_dir = find_home_dir(0); if( home_dir==0 ){ raw_printf(stderr, "-- warning: cannot find home directory;" " cannot read ~/.sqliterc\n"); return; } sqlite3_initialize(); zBuf = sqlite3_mprintf("%s/.sqliterc",home_dir); |
︙ | ︙ | |||
5313 5314 5315 5316 5317 5318 5319 5320 5321 5322 5323 5324 5325 5326 | const char *zSize; sqlite3_int64 szHeap; zSize = cmdline_option_value(argc, argv, ++i); szHeap = integerValue(zSize); if( szHeap>0x7fff0000 ) szHeap = 0x7fff0000; sqlite3_config(SQLITE_CONFIG_HEAP, malloc((int)szHeap), (int)szHeap, 64); #endif }else if( strcmp(z,"-scratch")==0 ){ int n, sz; sz = (int)integerValue(cmdline_option_value(argc,argv,++i)); if( sz>400000 ) sz = 400000; if( sz<2500 ) sz = 2500; n = (int)integerValue(cmdline_option_value(argc,argv,++i)); | > > | 5539 5540 5541 5542 5543 5544 5545 5546 5547 5548 5549 5550 5551 5552 5553 5554 | const char *zSize; sqlite3_int64 szHeap; zSize = cmdline_option_value(argc, argv, ++i); szHeap = integerValue(zSize); if( szHeap>0x7fff0000 ) szHeap = 0x7fff0000; sqlite3_config(SQLITE_CONFIG_HEAP, malloc((int)szHeap), (int)szHeap, 64); #else (void)cmdline_option_value(argc, argv, ++i); #endif }else if( strcmp(z,"-scratch")==0 ){ int n, sz; sz = (int)integerValue(cmdline_option_value(argc,argv,++i)); if( sz>400000 ) sz = 400000; if( sz<2500 ) sz = 2500; n = (int)integerValue(cmdline_option_value(argc,argv,++i)); |
︙ | ︙ | |||
5554 5555 5556 5557 5558 5559 5560 | ); if( warnInmemoryDb ){ printf("Connected to a "); printBold("transient in-memory database"); printf(".\nUse \".open FILENAME\" to reopen on a " "persistent database.\n"); } | | | 5782 5783 5784 5785 5786 5787 5788 5789 5790 5791 5792 5793 5794 5795 5796 | ); if( warnInmemoryDb ){ printf("Connected to a "); printBold("transient in-memory database"); printf(".\nUse \".open FILENAME\" to reopen on a " "persistent database.\n"); } zHome = find_home_dir(0); if( zHome ){ nHistory = strlen30(zHome) + 20; if( (zHistory = malloc(nHistory))!=0 ){ sqlite3_snprintf(nHistory, zHistory,"%s/.sqlite_history", zHome); } } if( zHistory ){ shell_read_history(zHistory); } |
︙ | ︙ | |||
5578 5579 5580 5581 5582 5583 5584 5585 5586 5587 5588 5589 5590 | } set_table_name(&data, 0); if( data.db ){ session_close_all(&data); sqlite3_close(data.db); } sqlite3_free(data.zFreeOnClose); #if !SQLITE_SHELL_IS_UTF8 for(i=0; i<argc; i++) sqlite3_free(argv[i]); sqlite3_free(argv); #endif return rc; } | > | 5806 5807 5808 5809 5810 5811 5812 5813 5814 5815 5816 5817 5818 5819 | } set_table_name(&data, 0); if( data.db ){ session_close_all(&data); sqlite3_close(data.db); } sqlite3_free(data.zFreeOnClose); find_home_dir(1); #if !SQLITE_SHELL_IS_UTF8 for(i=0; i<argc; i++) sqlite3_free(argv[i]); sqlite3_free(argv); #endif return rc; } |
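Note that the statement-trace hook in this file now goes through SQLite's v2 interface: sql_trace_callback() takes the (unsigned, void*, void*, void*) signature and is registered with sqlite3_trace_v2(db, SQLITE_TRACE_STMT, ...) rather than the legacy sqlite3_trace(). The standalone sketch below shows the same pattern outside the shell, assuming an SQLite new enough to provide sqlite3_trace_v2() (3.14 or later); the name traceStmt is made up for the example.

#include <stdio.h>
#include <string.h>
#include "sqlite3.h"

/* Print each statement as it runs, trimming any trailing semicolons and
** adding exactly one back, in the style of sql_trace_callback() above. */
static int traceStmt(unsigned mType, void *pCtx, void *pP, void *pX){
  FILE *out = (FILE*)pCtx;
  const char *zSql = (const char*)pX;   /* unexpanded SQL text for SQLITE_TRACE_STMT */
  int n = (int)strlen(zSql);
  (void)mType;
  (void)pP;
  while( n>0 && zSql[n-1]==';' ) n--;
  fprintf(out, "%.*s;\n", n, zSql);
  return 0;
}

int main(void){
  sqlite3 *db;
  if( sqlite3_open(":memory:", &db)!=SQLITE_OK ) return 1;
  sqlite3_trace_v2(db, SQLITE_TRACE_STMT, traceStmt, stderr);
  sqlite3_exec(db, "CREATE TABLE t(x); INSERT INTO t VALUES(1);", 0, 0, 0);
  sqlite3_close(db);
  return 0;
}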
Changes to src/shun.c.
︙ | ︙ | |||
315 316 317 318 319 320 321 | login_needed(0); return; } style_header("Artifact Receipts"); if( showAll ){ ofst = 0; }else{ | | | > > > > > | 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 | login_needed(0); return; } style_header("Artifact Receipts"); if( showAll ){ ofst = 0; }else{ style_submenu_element("All", "rcvfromlist?all=1"); } if( ofst>0 ){ style_submenu_element("Newer", "rcvfromlist?ofst=%d", ofst>30 ? ofst-30 : 0); } db_multi_exec( "CREATE TEMP TABLE rcvidUsed(x INTEGER PRIMARY KEY);" "INSERT OR IGNORE INTO rcvidUsed(x) SELECT rcvid FROM blob;" ); if( db_table_exists("repository","unversioned") ){ db_multi_exec( "INSERT OR IGNORE INTO rcvidUsed(x) SELECT rcvid FROM unversioned;" ); } db_prepare(&q, "SELECT rcvid, login, datetime(rcvfrom.mtime), rcvfrom.ipaddr," " EXISTS(SELECT 1 FROM rcvidUsed WHERE x=rcvfrom.rcvid)" " FROM rcvfrom LEFT JOIN user USING(uid)" " ORDER BY rcvid DESC LIMIT %d OFFSET %d", showAll ? -1 : 31, ofst ); |
︙ | ︙ | |||
356 357 358 359 360 361 362 | cnt = 0; while( db_step(&q)==SQLITE_ROW ){ int rcvid = db_column_int(&q, 0); const char *zUser = db_column_text(&q, 1); const char *zDate = db_column_text(&q, 2); const char *zIpAddr = db_column_text(&q, 3); if( cnt==30 && !showAll ){ | | < | 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 | cnt = 0; while( db_step(&q)==SQLITE_ROW ){ int rcvid = db_column_int(&q, 0); const char *zUser = db_column_text(&q, 1); const char *zDate = db_column_text(&q, 2); const char *zIpAddr = db_column_text(&q, 3); if( cnt==30 && !showAll ){ style_submenu_element("Older", "rcvfromlist?ofst=%d", ofst+30); }else{ cnt++; @ <tr> if( db_column_int(&q,4) ){ @ <td style="padding-right: 15px;text-align: right;"> @ <a href="rcvfrom?rcvid=%d(rcvid)">%d(rcvid)</a></td> }else{ |
︙ | ︙ | |||
387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 | ** ** Show a single RCVFROM table entry identified by the rcvid= query ** parameters. Requires Admin privilege. */ void rcvfrom_page(void){ int rcvid = atoi(PD("rcvid","0")); Stmt q; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_header("Artifact Receipt %d", rcvid); if( db_exists( "SELECT 1 FROM blob WHERE rcvid=%d AND" " NOT EXISTS (SELECT 1 FROM shun WHERE shun.uuid=blob.uuid)", rcvid) ){ | > | < | < | 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 | ** ** Show a single RCVFROM table entry identified by the rcvid= query ** parameters. Requires Admin privilege. */ void rcvfrom_page(void){ int rcvid = atoi(PD("rcvid","0")); Stmt q; int cnt; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_header("Artifact Receipt %d", rcvid); if( db_exists( "SELECT 1 FROM blob WHERE rcvid=%d AND" " NOT EXISTS (SELECT 1 FROM shun WHERE shun.uuid=blob.uuid)", rcvid) ){ style_submenu_element("Shun All", "shun?shun&rcvid=%d#addshun", rcvid); } if( db_exists( "SELECT 1 FROM blob WHERE rcvid=%d AND" " EXISTS (SELECT 1 FROM shun WHERE shun.uuid=blob.uuid)", rcvid) ){ style_submenu_element("Unshun All", "shun?accept&rcvid=%d#delshun", rcvid); } db_prepare(&q, "SELECT login, datetime(rcvfrom.mtime), rcvfrom.ipaddr" " FROM rcvfrom LEFT JOIN user USING(uid)" " WHERE rcvid=%d", rcvid ); |
︙ | ︙ | |||
439 440 441 442 443 444 445 | ); describe_artifacts("IN toshow"); db_prepare(&q, "SELECT blob.rid, blob.uuid, blob.size, description.summary\n" " FROM blob LEFT JOIN description ON (blob.rid=description.rid)" " WHERE blob.rcvid=%d", rcvid ); | | < > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 | ); describe_artifacts("IN toshow"); db_prepare(&q, "SELECT blob.rid, blob.uuid, blob.size, description.summary\n" " FROM blob LEFT JOIN description ON (blob.rid=description.rid)" " WHERE blob.rcvid=%d", rcvid ); cnt = 0; while( db_step(&q)==SQLITE_ROW ){ const char *zUuid = db_column_text(&q, 1); int size = db_column_int(&q, 2); const char *zDesc = db_column_text(&q, 3); if( zDesc==0 ) zDesc = ""; if( cnt==0 ){ @ <tr><th valign="top" align="right">Artifacts:</th> @ <td valign="top"> } cnt++; @ <a href="%R/info/%s(zUuid)">%s(zUuid)</a> @ %h(zDesc) (size: %d(size))<br /> } if( cnt>0 ){ @ <p> if( db_exists( "SELECT 1 FROM blob WHERE rcvid=%d AND" " NOT EXISTS (SELECT 1 FROM shun WHERE shun.uuid=blob.uuid)", rcvid) ){ @ <form action='%R/shun'> @ <input type="hidden" name="shun"> @ <input type="hidden" name="rcvid" value='%d(rcvid)'> @ <input type="submit" value="Shun All These Artifacts"> @ </form> } if( db_exists( "SELECT 1 FROM blob WHERE rcvid=%d AND" " EXISTS (SELECT 1 FROM shun WHERE shun.uuid=blob.uuid)", rcvid) ){ @ <form action='%R/shun'> @ <input type="hidden" name="unshun"> @ <input type="hidden" name="rcvid" value='%d(rcvid)'> @ <input type="submit" value="Unshun All These Artifacts"> @ </form> } @ </td></tr> } if( db_table_exists("repository","unversioned") ){ cnt = 0; if( PB("uvdelete") && PB("confirmdelete") ){ db_multi_exec( "DELETE FROM unversioned WHERE rcvid=%d", rcvid ); } db_finalize(&q); db_prepare(&q, "SELECT name, hash, sz\n" " FROM unversioned " " WHERE rcvid=%d", rcvid ); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q,0); const char *zHash = db_column_text(&q,1); int size = db_column_int(&q,2); int isDeleted = zHash==0; if( cnt==0 ){ @ <tr><th valign="top" align="right">Unversioned Files:</th> @ <td valign="top"> } cnt++; if( isDeleted ){ @ %h(zName) (deleted)<br /> }else{ @ <a href="%R/uv/%h(zName)">%h(zName)</a> (size: %d(size))<br /> } } if( cnt>0 ){ @ <p><form action='%R/rcvfrom'> @ <input type="hidden" name="rcvid" value='%d(rcvid)'> @ <input type="hidden" name="uvdelete" value="1"> if( PB("uvdelete") ){ @ <input type="hidden" name="confirmdelete" value="1"> @ <input type="submit" value="Confirm Deletion of These Files"> }else{ @ <input type="submit" value="Delete These Unversioned Files"> } @ </form> @ </td></tr> } } @ </table> db_finalize(&q); style_footer(); } |
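Both hunks above touch the unversioned table only after asking db_table_exists("repository","unversioned"), since repositories created before unversioned-file support lack that table. Fossil's own helper is not shown in this diff; the sketch below is a hypothetical equivalent written against the plain SQLite C API, consulting sqlite_master before any query is issued.

#include "sqlite3.h"

/* Hypothetical stand-in for a db_table_exists()-style guard: returns
** non-zero if table zTab exists in schema zSchema.  A sketch only, not
** Fossil's actual implementation. */
static int tableExists(sqlite3 *db, const char *zSchema, const char *zTab){
  char *zSql = sqlite3_mprintf(
    "SELECT 1 FROM \"%w\".sqlite_master WHERE type='table' AND name=%Q",
    zSchema, zTab);
  sqlite3_stmt *pStmt = 0;
  int exists = 0;
  if( zSql && sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0)==SQLITE_OK ){
    exists = sqlite3_step(pStmt)==SQLITE_ROW;
  }
  sqlite3_finalize(pStmt);
  sqlite3_free(zSql);
  return exists;
}

With a guard of this kind in place, the unversioned-file listing and the rcvidUsed INSERT above degrade gracefully on older repositories.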
Changes to src/sitemap.c.
︙ | ︙ | |||
99 100 101 102 103 104 105 106 107 108 109 110 111 112 | @ <li>%z(href("%R/tktsrch"))Ticket Search</a></li> } @ <li>%z(href("%R/timeline?y=t"))Recent activity</a></li> @ <li>%z(href("%R/attachlist"))List of Attachments</a></li> @ </ul> @ </li> } if( srchFlags ){ @ <li>%z(href("%R/search"))Full-Text Search</a></li> } @ <li>%z(href("%R/login"))Login/Logout/Change Password</a></li> if( g.perm.Read ){ @ <li>%z(href("%R/stat"))Repository Status</a> @ <ul> | > > > | 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 | @ <li>%z(href("%R/tktsrch"))Ticket Search</a></li> } @ <li>%z(href("%R/timeline?y=t"))Recent activity</a></li> @ <li>%z(href("%R/attachlist"))List of Attachments</a></li> @ </ul> @ </li> } if( g.perm.Read ){ @ <li>%z(href("%R/uvlist"))Unversioned Files</a> } if( srchFlags ){ @ <li>%z(href("%R/search"))Full-Text Search</a></li> } @ <li>%z(href("%R/login"))Login/Logout/Change Password</a></li> if( g.perm.Read ){ @ <li>%z(href("%R/stat"))Repository Status</a> @ <ul> |
︙ | ︙ |
Changes to src/skins.c.
︙ | ︙ | |||
98 99 100 101 102 103 104 | char *skin_use_alternative(const char *zName){ int i; Blob err = BLOB_INITIALIZER; if( strchr(zName, '/')!=0 ){ zAltSkinDir = fossil_strdup(zName); return 0; } | | | | 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 | char *skin_use_alternative(const char *zName){ int i; Blob err = BLOB_INITIALIZER; if( strchr(zName, '/')!=0 ){ zAltSkinDir = fossil_strdup(zName); return 0; } for(i=0; i<count(aBuiltinSkin); i++){ if( fossil_strcmp(aBuiltinSkin[i].zLabel, zName)==0 ){ pAltSkin = &aBuiltinSkin[i]; return 0; } } blob_appendf(&err, "available skins: %s", aBuiltinSkin[0].zLabel); for(i=1; i<count(aBuiltinSkin); i++){ blob_append(&err, " ", 1); blob_append(&err, aBuiltinSkin[i].zLabel, -1); } return blob_str(&err); } /* |
︙ | ︙ | |||
161 162 163 164 165 166 167 | } /* ** Return a pointer to a SkinDetail element. Return 0 if not found. */ static struct SkinDetail *skin_detail_find(const char *zName){ int lwr = 0; | | | 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 | } /* ** Return a pointer to a SkinDetail element. Return 0 if not found. */ static struct SkinDetail *skin_detail_find(const char *zName){ int lwr = 0; int upr = count(aSkinDetail); while( upr>=lwr ){ int mid = (upr+lwr)/2; int c = fossil_strcmp(aSkinDetail[mid].zName, zName); if( c==0 ) return &aSkinDetail[mid]; if( c<0 ){ lwr = mid+1; }else{ |
︙ | ︙ | |||
281 282 283 284 285 286 287 | /* ** Return true if there exists a skin name "zSkinName". */ static int skinExists(const char *zSkinName){ int i; if( zSkinName==0 ) return 0; | | | | 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 | /* ** Return true if there exists a skin name "zSkinName". */ static int skinExists(const char *zSkinName){ int i; if( zSkinName==0 ) return 0; for(i=0; i<count(aBuiltinSkin); i++){ if( fossil_strcmp(zSkinName, aBuiltinSkin[i].zDesc)==0 ) return 1; } return db_exists("SELECT 1 FROM config WHERE name='skin:%q'", zSkinName); } /* ** Construct and return an string of SQL statements that represents ** a "skin" setting. If zName==0 then return the skin currently ** installed. Otherwise, return one of the built-in skins designated ** by zName. ** ** Memory to hold the returned string is obtained from malloc. */ static char *getSkin(const char *zName){ const char *z; char *zLabel; static const char *azType[] = { "css", "header", "footer", "details" }; int i; Blob val; blob_zero(&val); for(i=0; i<count(azType); i++){ if( zName ){ zLabel = mprintf("skins/%s/%s.txt", zName, azType[i]); z = builtin_text(zLabel); fossil_free(zLabel); }else{ z = db_get(azType[i], 0); if( z==0 ){ |
︙ | ︙ | |||
425 426 427 428 429 430 431 | login_check_credentials(); if( !g.perm.Setup ){ login_needed(0); return; } db_begin_transaction(); zCurrent = getSkin(0); | | | 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 | login_check_credentials(); if( !g.perm.Setup ){ login_needed(0); return; } db_begin_transaction(); zCurrent = getSkin(0); for(i=0; i<count(aBuiltinSkin); i++){ aBuiltinSkin[i].zSQL = getSkin(aBuiltinSkin[i].zLabel); } /* Process requests to delete a user-defined skin */ if( P("del1") && (zName = skinVarName(P("sn"), 1))!=0 ){ style_header("Confirm Custom Skin Delete"); @ <form action="%s(g.zTop)/setup_skin" method="post"><div> |
︙ | ︙ | |||
456 457 458 459 460 461 462 | /* The user pressed one of the "Install" buttons. */ if( P("load") && (z = P("sn"))!=0 && z[0] ){ int seen = 0; /* Check to see if the current skin is already saved. If it is, there ** is no need to create a backup */ zCurrent = getSkin(0); | | | | | | 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 | /* The user pressed one of the "Install" buttons. */ if( P("load") && (z = P("sn"))!=0 && z[0] ){ int seen = 0; /* Check to see if the current skin is already saved. If it is, there ** is no need to create a backup */ zCurrent = getSkin(0); for(i=0; i<count(aBuiltinSkin); i++){ if( fossil_strcmp(aBuiltinSkin[i].zSQL, zCurrent)==0 ){ seen = 1; break; } } if( !seen ){ seen = db_exists("SELECT 1 FROM config WHERE name GLOB 'skin:*'" " AND value=%Q", zCurrent); if( !seen ){ db_multi_exec( "INSERT INTO config(name,value,mtime) VALUES(" " strftime('skin:Backup On %%Y-%%m-%%d %%H:%%M:%%S')," " %Q,now())", zCurrent ); } } seen = 0; for(i=0; i<count(aBuiltinSkin); i++){ if( fossil_strcmp(aBuiltinSkin[i].zDesc, z)==0 ){ seen = 1; zCurrent = aBuiltinSkin[i].zSQL; db_multi_exec("%s", zCurrent/*safe-for-%s*/); break; } } if( !seen ){ zName = skinVarName(z,0); zCurrent = db_get(zName, 0); db_multi_exec("%s", zCurrent/*safe-for-%s*/); } } style_header("Skins"); if( zErr ){ @ <p style="color:red">%h(zErr)</p> } @ <p>A "skin" is a combination of @ <a href="setup_skinedit?w=0">CSS</a>, @ <a href="setup_skinedit?w=2">Header</a>, @ <a href="setup_skinedit?w=1">Footer</a>, and @ <a href="setup_skinedit?w=3">Details</a> @ that determines the look and feel @ of the web interface.</p> @ if( pAltSkin ){ @ <p class="generalError"> @ This page is generated using an skin override named @ "%h(pAltSkin->zLabel)". You can change the skin configuration @ below, but the changes will not take effect until the Fossil server @ is restarted without the override.</p> @ } @ <h2>Available Skins:</h2> @ <table border="0"> for(i=0; i<count(aBuiltinSkin); i++){ z = aBuiltinSkin[i].zDesc; @ <tr><td>%d(i+1).<td>%h(z)<td> <td> if( fossil_strcmp(aBuiltinSkin[i].zSQL, zCurrent)==0 ){ @ (Currently In Use) seenCurrent = 1; }else{ @ <form action="%s(g.zTop)/setup_skin" method="post"> |
︙ | ︙ | |||
595 596 597 598 599 600 601 | login_check_credentials(); if( !g.perm.Setup ){ login_needed(0); return; } ii = atoi(PD("w","0")); | | | | | | | 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 | login_check_credentials(); if( !g.perm.Setup ){ login_needed(0); return; } ii = atoi(PD("w","0")); if( ii<0 || ii>count(aSkinAttr) ) ii = 0; zBasis = PD("basis","default"); zDflt = mprintf("skins/%s/%s.txt", zBasis, aSkinAttr[ii].zFile); db_begin_transaction(); if( P("revert")!=0 ){ db_multi_exec("DELETE FROM config WHERE name=%Q", aSkinAttr[ii].zFile); cgi_replace_parameter(aSkinAttr[ii].zFile, builtin_text(zDflt)); } style_header("%s", aSkinAttr[ii].zTitle); for(j=0; j<count(aSkinAttr); j++){ if( j==ii ) continue; style_submenu_element(aSkinAttr[j].zSubmenu, "%R/setup_skinedit?w=%d&basis=%h",j,zBasis); } style_submenu_element("Skins", "%R/setup_skin"); @ <form action="%s(g.zTop)/setup_skinedit" method="post"><div> login_insert_csrf_secret(); @ <input type='hidden' name='w' value='%d(ii)'> @ <h2>Edit %s(aSkinAttr[ii].zTitle):</h2> zContent = textarea_attribute("", 10, 80, aSkinAttr[ii].zFile, aSkinAttr[ii].zFile, builtin_text(zDflt), 0); @ <br /> @ <input type="submit" name="submit" value="Apply Changes" /> @ <hr /> @ Baseline: <select size='1' name='basis'> for(j=0; j<count(aBuiltinSkin); j++){ cgi_printf("<option value='%h'%s>%h</option>\n", aBuiltinSkin[j].zLabel, fossil_strcmp(zBasis,aBuiltinSkin[j].zLabel)==0 ? " selected" : "", aBuiltinSkin[j].zDesc ); } @ </select> |
︙ | ︙ |
Changes to src/sqlcmd.c.
︙ | ︙ | |||
140 141 142 143 144 145 146 | const char **pzErrMsg, const void *notUsed ){ add_content_sql_commands(db); db_add_aux_functions(db); re_add_sql_func(db); search_sql_setup(db); | < | 140 141 142 143 144 145 146 147 148 149 150 151 152 153 | const char **pzErrMsg, const void *notUsed ){ add_content_sql_commands(db); db_add_aux_functions(db); re_add_sql_func(db); search_sql_setup(db); foci_register(db); g.repositoryOpen = 1; g.db = db; return SQLITE_OK; } /* |
︙ | ︙ | |||
163 164 165 166 167 168 169 | ** ** Fossil Options: ** ** --no-repository Skip opening the repository database. ** ** WARNING: Careless use of this command can corrupt a Fossil repository ** in ways that are unrecoverable. Be sure you know what you are doing before | | | 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 | ** ** Fossil Options: ** ** --no-repository Skip opening the repository database. ** ** WARNING: Careless use of this command can corrupt a Fossil repository ** in ways that are unrecoverable. Be sure you know what you are doing before ** running any SQL commands that modify the repository database. ** ** The following extensions to the usual SQLite commands are provided: ** ** content(X) Return the content of artifact X. X can be a ** SHA1 hash or prefix or a tag. ** ** compress(X) Compress text X. |
︙ | ︙ | |||
186 187 188 189 190 191 192 | ** name X. ** ** now() Return the number of seconds since 1970. ** ** REGEXP The REGEXP operator works, unlike in ** standard SQLite. ** | | | < | < < < | 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 | ** name X. ** ** now() Return the number of seconds since 1970. ** ** REGEXP The REGEXP operator works, unlike in ** standard SQLite. ** ** files_of_checkin(X) A table-valued function that returns info on ** all files contained in check-in X. Example: ** SELECT * FROM files_of_checkin('trunk'); */ void cmd_sqlite3(void){ int noRepository; extern int sqlite3_shell(int, char**); noRepository = find_option("no-repository", 0, 0)!=0; if( !noRepository ){ db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); |
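Reviewer's note: the hunk above documents the SQL helpers the "fossil sql" shell layers on top of stock SQLite (content(), now(), files_of_checkin(), ...). Below is the general shape such a helper takes when registered through the public API; the name demo_now and the wrapper are hypothetical illustrations only, not Fossil's actual registration code (which runs through add_content_sql_commands() and foci_register(), as shown earlier in this file's diff).

#include <sqlite3.h>
#include <time.h>

/* Hypothetical illustration: a scalar SQL function returning seconds
** since 1970, comparable in spirit to Fossil's now() helper. */
static void nowFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
  (void)argc; (void)argv;
  sqlite3_result_int64(ctx, (sqlite3_int64)time(0));
}

/* Register the helper on an already-open connection. */
static int register_demo_funcs(sqlite3 *db){
  return sqlite3_create_function(db, "demo_now", 0, SQLITE_UTF8, 0,
                                 nowFunc, 0, 0);
}

Once registered, the helper can be used from any statement the shell runs, e.g. SELECT demo_now();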
︙ | ︙ | |||
225 226 227 228 229 230 231 | ** This routine closes the Fossil databases and/or invalidates the global ** state variables that keep track of them. */ void fossil_close(int bDb, int noRepository){ if( bDb ) db_close(1); if( noRepository ) g.zRepositoryName = 0; g.db = 0; | < | 220 221 222 223 224 225 226 227 228 229 | ** This routine closes the Fossil databases and/or invalidates the global ** state variables that keep track of them. */ void fossil_close(int bDb, int noRepository){ if( bDb ) db_close(1); if( noRepository ) g.zRepositoryName = 0; g.db = 0; g.repositoryOpen = 0; g.localOpen = 0; } |
Changes to src/sqlite3.c.
more than 10,000 changes
Changes to src/sqlite3.h.
︙ | ︙ | |||
26 27 28 29 30 31 32 | ** on how SQLite interfaces are supposed to operate. ** ** The name of this file under configuration management is "sqlite.h.in". ** The makefile makes some minor changes to this file (such as inserting ** the version number) and changes its name to "sqlite3.h" as ** part of the build process. */ | | | | 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 | ** on how SQLite interfaces are supposed to operate. ** ** The name of this file under configuration management is "sqlite.h.in". ** The makefile makes some minor changes to this file (such as inserting ** the version number) and changes its name to "sqlite3.h" as ** part of the build process. */ #ifndef SQLITE3_H #define SQLITE3_H #include <stdarg.h> /* Needed for the definition of va_list */ /* ** Make sure we can call this stuff from C++. */ #ifdef __cplusplus extern "C" { |
︙ | ︙ | |||
50 51 52 53 54 55 56 57 | #endif #ifndef SQLITE_API # define SQLITE_API #endif #ifndef SQLITE_CDECL # define SQLITE_CDECL #endif #ifndef SQLITE_STDCALL | > > > | > > > > > > | 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 | #endif #ifndef SQLITE_API # define SQLITE_API #endif #ifndef SQLITE_CDECL # define SQLITE_CDECL #endif #ifndef SQLITE_APICALL # define SQLITE_APICALL #endif #ifndef SQLITE_STDCALL # define SQLITE_STDCALL SQLITE_APICALL #endif #ifndef SQLITE_CALLBACK # define SQLITE_CALLBACK #endif #ifndef SQLITE_SYSAPI # define SQLITE_SYSAPI #endif /* ** These no-op macros are used in front of interfaces to mark those ** interfaces as either deprecated or experimental. New applications ** should not use deprecated interfaces - they are supported for backwards ** compatibility only. Application writers should be aware that |
︙ | ︙ | |||
95 96 97 98 99 100 101 | ** with the value (X*1000000 + Y*1000 + Z) where X, Y, and Z are the same ** numbers used in [SQLITE_VERSION].)^ ** The SQLITE_VERSION_NUMBER for any given release of SQLite will also ** be larger than the release from which it is derived. Either Y will ** be held constant and Z will be incremented or else Y will be incremented ** and Z will be reset to zero. ** | > | | | | | 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 | ** with the value (X*1000000 + Y*1000 + Z) where X, Y, and Z are the same ** numbers used in [SQLITE_VERSION].)^ ** The SQLITE_VERSION_NUMBER for any given release of SQLite will also ** be larger than the release from which it is derived. Either Y will ** be held constant and Z will be incremented or else Y will be incremented ** and Z will be reset to zero. ** ** Since [version 3.6.18] ([dateof:3.6.18]), ** SQLite source code has been stored in the ** <a href="http://www.fossil-scm.org/">Fossil configuration management ** system</a>. ^The SQLITE_SOURCE_ID macro evaluates to ** a string which identifies a particular check-in of SQLite ** within its configuration management system. ^The SQLITE_SOURCE_ID ** string contains the date and time of the check-in (UTC) and an SHA1 ** hash of the entire source tree. ** ** See also: [sqlite3_libversion()], ** [sqlite3_libversion_number()], [sqlite3_sourceid()], ** [sqlite_version()] and [sqlite_source_id()]. */ #define SQLITE_VERSION "3.16.0" #define SQLITE_VERSION_NUMBER 3016000 #define SQLITE_SOURCE_ID "2016-11-02 14:50:19 3028845329c9b7acdec2ec8b01d00d782347454c" /* ** CAPI3REF: Run-Time Library Version Numbers ** KEYWORDS: sqlite3_version, sqlite3_sourceid ** ** These interfaces provide the same information as the [SQLITE_VERSION], ** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros |
︙ | ︙ | |||
142 143 144 145 146 147 148 | ** [SQLITE_VERSION_NUMBER]. ^The sqlite3_sourceid() function returns ** a pointer to a string constant whose value is the same as the ** [SQLITE_SOURCE_ID] C preprocessor macro. ** ** See also: [sqlite_version()] and [sqlite_source_id()]. */ SQLITE_API SQLITE_EXTERN const char sqlite3_version[]; | | | | | 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 | ** [SQLITE_VERSION_NUMBER]. ^The sqlite3_sourceid() function returns ** a pointer to a string constant whose value is the same as the ** [SQLITE_SOURCE_ID] C preprocessor macro. ** ** See also: [sqlite_version()] and [sqlite_source_id()]. */ SQLITE_API SQLITE_EXTERN const char sqlite3_version[]; SQLITE_API const char *sqlite3_libversion(void); SQLITE_API const char *sqlite3_sourceid(void); SQLITE_API int sqlite3_libversion_number(void); /* ** CAPI3REF: Run-Time Library Compilation Options Diagnostics ** ** ^The sqlite3_compileoption_used() function returns 0 or 1 ** indicating whether the specified option was defined at ** compile time. ^The SQLITE_ prefix may be omitted from the |
︙ | ︙ | |||
169 170 171 172 173 174 175 | ** and sqlite3_compileoption_get() may be omitted by specifying the ** [SQLITE_OMIT_COMPILEOPTION_DIAGS] option at compile time. ** ** See also: SQL functions [sqlite_compileoption_used()] and ** [sqlite_compileoption_get()] and the [compile_options pragma]. */ #ifndef SQLITE_OMIT_COMPILEOPTION_DIAGS | | | | 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 | ** and sqlite3_compileoption_get() may be omitted by specifying the ** [SQLITE_OMIT_COMPILEOPTION_DIAGS] option at compile time. ** ** See also: SQL functions [sqlite_compileoption_used()] and ** [sqlite_compileoption_get()] and the [compile_options pragma]. */ #ifndef SQLITE_OMIT_COMPILEOPTION_DIAGS SQLITE_API int sqlite3_compileoption_used(const char *zOptName); SQLITE_API const char *sqlite3_compileoption_get(int N); #endif /* ** CAPI3REF: Test To See If The Library Is Threadsafe ** ** ^The sqlite3_threadsafe() function returns zero if and only if ** SQLite was compiled with mutexing code omitted due to the |
︙ | ︙ | |||
209 210 211 212 213 214 215 | ** sqlite3_threadsafe() function shows only the compile-time setting of ** thread safety, not any run-time changes to that setting made by ** sqlite3_config(). In other words, the return value from sqlite3_threadsafe() ** is unchanged by calls to sqlite3_config().)^ ** ** See the [threading mode] documentation for additional information. */ | | | 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 | ** sqlite3_threadsafe() function shows only the compile-time setting of ** thread safety, not any run-time changes to that setting made by ** sqlite3_config(). In other words, the return value from sqlite3_threadsafe() ** is unchanged by calls to sqlite3_config().)^ ** ** See the [threading mode] documentation for additional information. */ SQLITE_API int sqlite3_threadsafe(void); /* ** CAPI3REF: Database Connection Handle ** KEYWORDS: {database connection} {database connections} ** ** Each open SQLite database is represented by a pointer to an instance of ** the opaque structure named "sqlite3". It is useful to think of an sqlite3 |
︙ | ︙ | |||
306 307 308 309 310 311 312 | ** must be either a NULL ** pointer or an [sqlite3] object pointer obtained ** from [sqlite3_open()], [sqlite3_open16()], or ** [sqlite3_open_v2()], and not previously closed. ** ^Calling sqlite3_close() or sqlite3_close_v2() with a NULL pointer ** argument is a harmless no-op. */ | | | | 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 | ** must be either a NULL ** pointer or an [sqlite3] object pointer obtained ** from [sqlite3_open()], [sqlite3_open16()], or ** [sqlite3_open_v2()], and not previously closed. ** ^Calling sqlite3_close() or sqlite3_close_v2() with a NULL pointer ** argument is a harmless no-op. */ SQLITE_API int sqlite3_close(sqlite3*); SQLITE_API int sqlite3_close_v2(sqlite3*); /* ** The type for a callback function. ** This is legacy and deprecated. It is included for historical ** compatibility and is not documented. */ typedef int (*sqlite3_callback)(void*,int,char**, char**); |
︙ | ︙ | |||
378 379 380 381 382 383 384 | ** is a valid and open [database connection]. ** <li> The application must not close the [database connection] specified by ** the 1st parameter to sqlite3_exec() while sqlite3_exec() is running. ** <li> The application must not modify the SQL statement text passed into ** the 2nd parameter of sqlite3_exec() while sqlite3_exec() is running. ** </ul> */ | | | 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 | ** is a valid and open [database connection]. ** <li> The application must not close the [database connection] specified by ** the 1st parameter to sqlite3_exec() while sqlite3_exec() is running. ** <li> The application must not modify the SQL statement text passed into ** the 2nd parameter of sqlite3_exec() while sqlite3_exec() is running. ** </ul> */ SQLITE_API int sqlite3_exec( sqlite3*, /* An open database */ const char *sql, /* SQL to be evaluated */ int (*callback)(void*,int,char**,char**), /* Callback function */ void *, /* 1st argument to callback */ char **errmsg /* Error msg written here */ ); |
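As a side note on the sqlite3_exec() interface documented in the hunk above, here is a minimal, self-contained usage sketch. The file name demo.db and the query are placeholders, not anything prescribed by this check-in.

#include <stdio.h>
#include <sqlite3.h>

/* Print each result row; sqlite3_exec() calls this once per row. */
static int print_row(void *arg, int nCol, char **azVal, char **azCol){
  int i;
  (void)arg;
  for(i=0; i<nCol; i++){
    printf("%s=%s%s", azCol[i], azVal[i] ? azVal[i] : "NULL",
           i<nCol-1 ? " " : "\n");
  }
  return 0;                /* returning non-zero aborts with SQLITE_ABORT */
}

int main(void){
  sqlite3 *db;
  char *zErr = 0;
  if( sqlite3_open("demo.db", &db)!=SQLITE_OK ) return 1;
  if( sqlite3_exec(db, "SELECT name FROM sqlite_master;", print_row, 0, &zErr)
      !=SQLITE_OK ){
    fprintf(stderr, "error: %s\n", zErr);
    sqlite3_free(zErr);
  }
  sqlite3_close(db);
  return 0;
}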
︙ | ︙ | |||
439 440 441 442 443 444 445 | ** CAPI3REF: Extended Result Codes ** KEYWORDS: {extended result code definitions} ** ** In its default configuration, SQLite API routines return one of 30 integer ** [result codes]. However, experience has shown that many of ** these result codes are too coarse-grained. They do not provide as ** much information about problems as programmers might like. In an effort to | | > | 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 | ** CAPI3REF: Extended Result Codes ** KEYWORDS: {extended result code definitions} ** ** In its default configuration, SQLite API routines return one of 30 integer ** [result codes]. However, experience has shown that many of ** these result codes are too coarse-grained. They do not provide as ** much information about problems as programmers might like. In an effort to ** address this, newer versions of SQLite (version 3.3.8 [dateof:3.3.8] ** and later) include ** support for additional result codes that provide more detailed information ** about errors. These [extended result codes] are enabled or disabled ** on a per database connection basis using the ** [sqlite3_extended_result_codes()] API. Or, the extended code for ** the most recent error can be obtained using ** [sqlite3_extended_errcode()]. */ |
︙ | ︙ | |||
502 503 504 505 506 507 508 509 510 511 512 513 514 515 | #define SQLITE_CONSTRAINT_UNIQUE (SQLITE_CONSTRAINT | (8<<8)) #define SQLITE_CONSTRAINT_VTAB (SQLITE_CONSTRAINT | (9<<8)) #define SQLITE_CONSTRAINT_ROWID (SQLITE_CONSTRAINT |(10<<8)) #define SQLITE_NOTICE_RECOVER_WAL (SQLITE_NOTICE | (1<<8)) #define SQLITE_NOTICE_RECOVER_ROLLBACK (SQLITE_NOTICE | (2<<8)) #define SQLITE_WARNING_AUTOINDEX (SQLITE_WARNING | (1<<8)) #define SQLITE_AUTH_USER (SQLITE_AUTH | (1<<8)) /* ** CAPI3REF: Flags For File Open Operations ** ** These bit values are intended for use in the ** 3rd parameter to the [sqlite3_open_v2()] interface and ** in the 4th parameter to the [sqlite3_vfs.xOpen] method. | > | 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 | #define SQLITE_CONSTRAINT_UNIQUE (SQLITE_CONSTRAINT | (8<<8)) #define SQLITE_CONSTRAINT_VTAB (SQLITE_CONSTRAINT | (9<<8)) #define SQLITE_CONSTRAINT_ROWID (SQLITE_CONSTRAINT |(10<<8)) #define SQLITE_NOTICE_RECOVER_WAL (SQLITE_NOTICE | (1<<8)) #define SQLITE_NOTICE_RECOVER_ROLLBACK (SQLITE_NOTICE | (2<<8)) #define SQLITE_WARNING_AUTOINDEX (SQLITE_WARNING | (1<<8)) #define SQLITE_AUTH_USER (SQLITE_AUTH | (1<<8)) #define SQLITE_OK_LOAD_PERMANENTLY (SQLITE_OK | (1<<8)) /* ** CAPI3REF: Flags For File Open Operations ** ** These bit values are intended for use in the ** 3rd parameter to the [sqlite3_open_v2()] interface and ** in the 4th parameter to the [sqlite3_vfs.xOpen] method. |
︙ | ︙ | |||
961 962 963 964 965 966 967 968 969 970 971 972 973 974 | ** the [SQLITE_USE_FCNTL_TRACE] compile-time option is enabled. ** ** <li>[[SQLITE_FCNTL_HAS_MOVED]] ** The [SQLITE_FCNTL_HAS_MOVED] file control interprets its argument as a ** pointer to an integer and it writes a boolean into that integer depending ** on whether or not the file has been renamed, moved, or deleted since it ** was first opened. ** ** <li>[[SQLITE_FCNTL_WIN32_SET_HANDLE]] ** The [SQLITE_FCNTL_WIN32_SET_HANDLE] opcode is used for debugging. This ** opcode causes the xFileControl method to swap the file handle with the one ** pointed to by the pArg argument. This capability is used during testing ** and only needs to be supported when SQLITE_TEST is defined. ** | > > > > > > | 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 | ** the [SQLITE_USE_FCNTL_TRACE] compile-time option is enabled. ** ** <li>[[SQLITE_FCNTL_HAS_MOVED]] ** The [SQLITE_FCNTL_HAS_MOVED] file control interprets its argument as a ** pointer to an integer and it writes a boolean into that integer depending ** on whether or not the file has been renamed, moved, or deleted since it ** was first opened. ** ** <li>[[SQLITE_FCNTL_WIN32_GET_HANDLE]] ** The [SQLITE_FCNTL_WIN32_GET_HANDLE] opcode can be used to obtain the ** underlying native file handle associated with a file handle. This file ** control interprets its argument as a pointer to a native file handle and ** writes the resulting value there. ** ** <li>[[SQLITE_FCNTL_WIN32_SET_HANDLE]] ** The [SQLITE_FCNTL_WIN32_SET_HANDLE] opcode is used for debugging. This ** opcode causes the xFileControl method to swap the file handle with the one ** pointed to by the pArg argument. This capability is used during testing ** and only needs to be supported when SQLITE_TEST is defined. ** |
︙ | ︙ | |||
1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 | #define SQLITE_FCNTL_COMMIT_PHASETWO 22 #define SQLITE_FCNTL_WIN32_SET_HANDLE 23 #define SQLITE_FCNTL_WAL_BLOCK 24 #define SQLITE_FCNTL_ZIPVFS 25 #define SQLITE_FCNTL_RBU 26 #define SQLITE_FCNTL_VFS_POINTER 27 #define SQLITE_FCNTL_JOURNAL_POINTER 28 /* deprecated names */ #define SQLITE_GET_LOCKPROXYFILE SQLITE_FCNTL_GET_LOCKPROXYFILE #define SQLITE_SET_LOCKPROXYFILE SQLITE_FCNTL_SET_LOCKPROXYFILE #define SQLITE_LAST_ERRNO SQLITE_FCNTL_LAST_ERRNO /* ** CAPI3REF: Mutex Handle ** ** The mutex module within SQLite defines [sqlite3_mutex] to be an ** abstract type for a mutex object. The SQLite core never looks ** at the internal representation of an [sqlite3_mutex]. It only ** deals with pointers to the [sqlite3_mutex] object. ** ** Mutexes are created using [sqlite3_mutex_alloc()]. */ typedef struct sqlite3_mutex sqlite3_mutex; /* ** CAPI3REF: OS Interface Object ** ** An instance of the sqlite3_vfs object defines the interface between ** the SQLite core and the underlying operating system. The "vfs" ** in the name of the object stands for "virtual file system". See ** the [VFS | VFS documentation] for further information. | > > > > > > > > > > > > | 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 | #define SQLITE_FCNTL_COMMIT_PHASETWO 22 #define SQLITE_FCNTL_WIN32_SET_HANDLE 23 #define SQLITE_FCNTL_WAL_BLOCK 24 #define SQLITE_FCNTL_ZIPVFS 25 #define SQLITE_FCNTL_RBU 26 #define SQLITE_FCNTL_VFS_POINTER 27 #define SQLITE_FCNTL_JOURNAL_POINTER 28 #define SQLITE_FCNTL_WIN32_GET_HANDLE 29 #define SQLITE_FCNTL_PDB 30 /* deprecated names */ #define SQLITE_GET_LOCKPROXYFILE SQLITE_FCNTL_GET_LOCKPROXYFILE #define SQLITE_SET_LOCKPROXYFILE SQLITE_FCNTL_SET_LOCKPROXYFILE #define SQLITE_LAST_ERRNO SQLITE_FCNTL_LAST_ERRNO /* ** CAPI3REF: Mutex Handle ** ** The mutex module within SQLite defines [sqlite3_mutex] to be an ** abstract type for a mutex object. The SQLite core never looks ** at the internal representation of an [sqlite3_mutex]. It only ** deals with pointers to the [sqlite3_mutex] object. ** ** Mutexes are created using [sqlite3_mutex_alloc()]. */ typedef struct sqlite3_mutex sqlite3_mutex; /* ** CAPI3REF: Loadable Extension Thunk ** ** A pointer to the opaque sqlite3_api_routines structure is passed as ** the third parameter to entry points of [loadable extensions]. This ** structure must be typedefed in order to work around compiler warnings ** on some platforms. */ typedef struct sqlite3_api_routines sqlite3_api_routines; /* ** CAPI3REF: OS Interface Object ** ** An instance of the sqlite3_vfs object defines the interface between ** the SQLite core and the underlying operating system. The "vfs" ** in the name of the object stands for "virtual file system". See ** the [VFS | VFS documentation] for further information. |
︙ | ︙ | |||
1366 1367 1368 1369 1370 1371 1372 | ** (using the [SQLITE_OS_OTHER=1] compile-time ** option) the application must supply a suitable implementation for ** sqlite3_os_init() and sqlite3_os_end(). An application-supplied ** implementation of sqlite3_os_init() or sqlite3_os_end() ** must return [SQLITE_OK] on success and some other [error code] upon ** failure. */ | | | | | | 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 | ** (using the [SQLITE_OS_OTHER=1] compile-time ** option) the application must supply a suitable implementation for ** sqlite3_os_init() and sqlite3_os_end(). An application-supplied ** implementation of sqlite3_os_init() or sqlite3_os_end() ** must return [SQLITE_OK] on success and some other [error code] upon ** failure. */ SQLITE_API int sqlite3_initialize(void); SQLITE_API int sqlite3_shutdown(void); SQLITE_API int sqlite3_os_init(void); SQLITE_API int sqlite3_os_end(void); /* ** CAPI3REF: Configuring The SQLite Library ** ** The sqlite3_config() interface is used to make global configuration ** changes to SQLite in order to tune SQLite to the specific needs of ** the application. The default configuration is recommended for most |
︙ | ︙ | |||
1402 1403 1404 1405 1406 1407 1408 | ** vary depending on the [configuration option] ** in the first argument. ** ** ^When a configuration option is set, sqlite3_config() returns [SQLITE_OK]. ** ^If the option is unknown or SQLite is unable to set the option ** then this routine returns a non-zero [error code]. */ | | | | 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 1454 1455 1456 1457 1458 1459 1460 1461 1462 1463 1464 1465 | ** vary depending on the [configuration option] ** in the first argument. ** ** ^When a configuration option is set, sqlite3_config() returns [SQLITE_OK]. ** ^If the option is unknown or SQLite is unable to set the option ** then this routine returns a non-zero [error code]. */ SQLITE_API int sqlite3_config(int, ...); /* ** CAPI3REF: Configure database connections ** METHOD: sqlite3 ** ** The sqlite3_db_config() interface is used to make configuration ** changes to a [database connection]. The interface is similar to ** [sqlite3_config()] except that the changes apply to a single ** [database connection] (specified in the first argument). ** ** The second argument to sqlite3_db_config(D,V,...) is the ** [SQLITE_DBCONFIG_LOOKASIDE | configuration verb] - an integer code ** that indicates what aspect of the [database connection] is being configured. ** Subsequent arguments vary depending on the configuration verb. ** ** ^Calls to sqlite3_db_config() return SQLITE_OK if and only if ** the call is considered successful. */ SQLITE_API int sqlite3_db_config(sqlite3*, int op, ...); /* ** CAPI3REF: Memory Allocation Routines ** ** An instance of this object defines the interface between SQLite ** and low-level memory allocation routines. ** |
︙ | ︙ | |||
1935 1936 1937 1938 1939 1940 1941 | ** <dt>SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION</dt> ** <dd> ^This option is used to enable or disable the [sqlite3_load_extension()] ** interface independently of the [load_extension()] SQL function. ** The [sqlite3_enable_load_extension()] API enables or disables both the ** C-API [sqlite3_load_extension()] and the SQL function [load_extension()]. ** There should be two additional arguments. ** When the first argument to this interface is 1, then only the C-API is | | > > > > > > > > > > > > > > > > > > > > > > > | | 1965 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 | ** <dt>SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION</dt> ** <dd> ^This option is used to enable or disable the [sqlite3_load_extension()] ** interface independently of the [load_extension()] SQL function. ** The [sqlite3_enable_load_extension()] API enables or disables both the ** C-API [sqlite3_load_extension()] and the SQL function [load_extension()]. ** There should be two additional arguments. ** When the first argument to this interface is 1, then only the C-API is ** enabled and the SQL function remains disabled. If the first argument to ** this interface is 0, then both the C-API and the SQL function are disabled. ** If the first argument is -1, then no changes are made to state of either the ** C-API or the SQL function. ** The second parameter is a pointer to an integer into which ** is written 0 or 1 to indicate whether [sqlite3_load_extension()] interface ** is disabled or enabled following this call. The second parameter may ** be a NULL pointer, in which case the new setting is not reported back. ** </dd> ** ** <dt>SQLITE_DBCONFIG_MAINDBNAME</dt> ** <dd> ^This option is used to change the name of the "main" database ** schema. ^The sole argument is a pointer to a constant UTF8 string ** which will become the new schema name in place of "main". ^SQLite ** does not make a copy of the new main schema name string, so the application ** must ensure that the argument passed into this DBCONFIG option is unchanged ** until after the database connection closes. ** </dd> ** ** <dt>SQLITE_DBCONFIG_NO_CKPT_ON_CLOSE</dt> ** <dd> Usually, when a database in wal mode is closed or detached from a ** database handle, SQLite checks if this will mean that there are now no ** connections at all to the database. If so, it performs a checkpoint ** operation before closing the connection. This option may be used to ** override this behaviour. The first parameter passed to this operation ** is an integer - non-zero to disable checkpoints-on-close, or zero (the ** default) to enable them. The second parameter is a pointer to an integer ** into which is written 0 or 1 to indicate whether checkpoints-on-close ** have been disabled - 0 if they are not disabled, 1 if they are. 
** </dd> ** ** </dl> */ #define SQLITE_DBCONFIG_MAINDBNAME 1000 /* const char* */ #define SQLITE_DBCONFIG_LOOKASIDE 1001 /* void* int int */ #define SQLITE_DBCONFIG_ENABLE_FKEY 1002 /* int int* */ #define SQLITE_DBCONFIG_ENABLE_TRIGGER 1003 /* int int* */ #define SQLITE_DBCONFIG_ENABLE_FTS3_TOKENIZER 1004 /* int int* */ #define SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION 1005 /* int int* */ #define SQLITE_DBCONFIG_NO_CKPT_ON_CLOSE 1006 /* int int* */ /* ** CAPI3REF: Enable Or Disable Extended Result Codes ** METHOD: sqlite3 ** ** ^The sqlite3_extended_result_codes() routine enables or disables the ** [extended result codes] feature of SQLite. ^The extended result ** codes are disabled by default for historical compatibility. */ SQLITE_API int sqlite3_extended_result_codes(sqlite3*, int onoff); /* ** CAPI3REF: Last Insert Rowid ** METHOD: sqlite3 ** ** ^Each entry in most SQLite tables (except for [WITHOUT ROWID] tables) ** has a unique 64-bit signed |
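The SQLITE_DBCONFIG_MAINDBNAME and SQLITE_DBCONFIG_NO_CKPT_ON_CLOSE verbs added in the hunks above can be exercised as follows; a minimal sketch, assuming db is an open connection and ignoring return codes. The schema name "repo" is an arbitrary example.

#include <sqlite3.h>

static void demo_dbconfig(sqlite3 *db){
  int enabled = -1;

  /* Rename the "main" schema; per the documentation above, the string
  ** must remain unchanged until the connection closes, hence static. */
  static const char zName[] = "repo";
  sqlite3_db_config(db, SQLITE_DBCONFIG_MAINDBNAME, zName);

  /* Allow the C-API sqlite3_load_extension() while keeping the SQL
  ** load_extension() function disabled (first argument 1). */
  sqlite3_db_config(db, SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION, 1, &enabled);

  /* Skip the automatic checkpoint when the last connection to a WAL
  ** database closes; 'enabled' reports the resulting state. */
  sqlite3_db_config(db, SQLITE_DBCONFIG_NO_CKPT_ON_CLOSE, 1, &enabled);
}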
︙ | ︙ | |||
2014 2015 2016 2017 2018 2019 2020 | ** If a separate thread performs a new [INSERT] on the same ** database connection while the [sqlite3_last_insert_rowid()] ** function is running and thus changes the last insert [rowid], ** then the value returned by [sqlite3_last_insert_rowid()] is ** unpredictable and might not equal either the old or the new ** last insert [rowid]. */ | | | 2067 2068 2069 2070 2071 2072 2073 2074 2075 2076 2077 2078 2079 2080 2081 | ** If a separate thread performs a new [INSERT] on the same ** database connection while the [sqlite3_last_insert_rowid()] ** function is running and thus changes the last insert [rowid], ** then the value returned by [sqlite3_last_insert_rowid()] is ** unpredictable and might not equal either the old or the new ** last insert [rowid]. */ SQLITE_API sqlite3_int64 sqlite3_last_insert_rowid(sqlite3*); /* ** CAPI3REF: Count The Number Of Rows Modified ** METHOD: sqlite3 ** ** ^This function returns the number of rows modified, inserted or ** deleted by the most recently completed INSERT, UPDATE or DELETE |
︙ | ︙ | |||
2067 2068 2069 2070 2071 2072 2073 | ** See also the [sqlite3_total_changes()] interface, the ** [count_changes pragma], and the [changes() SQL function]. ** ** If a separate thread makes changes on the same database connection ** while [sqlite3_changes()] is running then the value returned ** is unpredictable and not meaningful. */ | | | 2120 2121 2122 2123 2124 2125 2126 2127 2128 2129 2130 2131 2132 2133 2134 | ** See also the [sqlite3_total_changes()] interface, the ** [count_changes pragma], and the [changes() SQL function]. ** ** If a separate thread makes changes on the same database connection ** while [sqlite3_changes()] is running then the value returned ** is unpredictable and not meaningful. */ SQLITE_API int sqlite3_changes(sqlite3*); /* ** CAPI3REF: Total Number Of Rows Modified ** METHOD: sqlite3 ** ** ^This function returns the total number of rows inserted, modified or ** deleted by all [INSERT], [UPDATE] or [DELETE] statements completed |
︙ | ︙ | |||
2091 2092 2093 2094 2095 2096 2097 | ** See also the [sqlite3_changes()] interface, the ** [count_changes pragma], and the [total_changes() SQL function]. ** ** If a separate thread makes changes on the same database connection ** while [sqlite3_total_changes()] is running then the value ** returned is unpredictable and not meaningful. */ | | | 2144 2145 2146 2147 2148 2149 2150 2151 2152 2153 2154 2155 2156 2157 2158 | ** See also the [sqlite3_changes()] interface, the ** [count_changes pragma], and the [total_changes() SQL function]. ** ** If a separate thread makes changes on the same database connection ** while [sqlite3_total_changes()] is running then the value ** returned is unpredictable and not meaningful. */ SQLITE_API int sqlite3_total_changes(sqlite3*); /* ** CAPI3REF: Interrupt A Long-Running Query ** METHOD: sqlite3 ** ** ^This function causes any pending database operation to abort and ** return at its earliest opportunity. This routine is typically |
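A small sketch tying together the three bookkeeping calls documented above (sqlite3_last_insert_rowid(), sqlite3_changes(), sqlite3_total_changes()); it assumes a write statement has just finished on the same connection and thread.

#include <stdio.h>
#include <sqlite3.h>

/* Report what the most recent INSERT/UPDATE/DELETE did. */
static void report_write(sqlite3 *db){
  printf("last rowid        : %lld\n",
         (long long)sqlite3_last_insert_rowid(db));
  printf("rows changed      : %d\n", sqlite3_changes(db));
  printf("total rows changed: %d\n", sqlite3_total_changes(db));
}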
︙ | ︙ | |||
2131 2132 2133 2134 2135 2136 2137 | ** ^A call to sqlite3_interrupt(D) that occurs when there are no running ** SQL statements is a no-op and has no effect on SQL statements ** that are started after the sqlite3_interrupt() call returns. ** ** If the database connection closes while [sqlite3_interrupt()] ** is running then bad things will likely happen. */ | | | 2184 2185 2186 2187 2188 2189 2190 2191 2192 2193 2194 2195 2196 2197 2198 | ** ^A call to sqlite3_interrupt(D) that occurs when there are no running ** SQL statements is a no-op and has no effect on SQL statements ** that are started after the sqlite3_interrupt() call returns. ** ** If the database connection closes while [sqlite3_interrupt()] ** is running then bad things will likely happen. */ SQLITE_API void sqlite3_interrupt(sqlite3*); /* ** CAPI3REF: Determine If An SQL Statement Is Complete ** ** These routines are useful during command-line input to determine if the ** currently entered text seems to form a complete SQL statement or ** if additional input is needed before sending the text into |
︙ | ︙ | |||
2166 2167 2168 2169 2170 2171 2172 | ** ** The input to [sqlite3_complete()] must be a zero-terminated ** UTF-8 string. ** ** The input to [sqlite3_complete16()] must be a zero-terminated ** UTF-16 string in native byte order. */ | | | | 2219 2220 2221 2222 2223 2224 2225 2226 2227 2228 2229 2230 2231 2232 2233 2234 | ** ** The input to [sqlite3_complete()] must be a zero-terminated ** UTF-8 string. ** ** The input to [sqlite3_complete16()] must be a zero-terminated ** UTF-16 string in native byte order. */ SQLITE_API int sqlite3_complete(const char *sql); SQLITE_API int sqlite3_complete16(const void *sql); /* ** CAPI3REF: Register A Callback To Handle SQLITE_BUSY Errors ** KEYWORDS: {busy-handler callback} {busy handler} ** METHOD: sqlite3 ** ** ^The sqlite3_busy_handler(D,X,P) routine sets a callback function X |
︙ | ︙ | |||
2228 2229 2230 2231 2232 2233 2234 | ** database connection that invoked the busy handler. In other words, ** the busy handler is not reentrant. Any such actions ** result in undefined behavior. ** ** A busy handler must not close the database connection ** or [prepared statement] that invoked the busy handler. */ | | | 2281 2282 2283 2284 2285 2286 2287 2288 2289 2290 2291 2292 2293 2294 2295 | ** database connection that invoked the busy handler. In other words, ** the busy handler is not reentrant. Any such actions ** result in undefined behavior. ** ** A busy handler must not close the database connection ** or [prepared statement] that invoked the busy handler. */ SQLITE_API int sqlite3_busy_handler(sqlite3*,int(*)(void*,int),void*); /* ** CAPI3REF: Set A Busy Timeout ** METHOD: sqlite3 ** ** ^This routine sets a [sqlite3_busy_handler | busy handler] that sleeps ** for a specified amount of time when a table is locked. ^The handler |
︙ | ︙ | |||
2251 2252 2253 2254 2255 2256 2257 | ** ^(There can only be a single busy handler for a particular ** [database connection] at any given moment. If another busy handler ** was defined (using [sqlite3_busy_handler()]) prior to calling ** this routine, that other busy handler is cleared.)^ ** ** See also: [PRAGMA busy_timeout] */ | | | 2304 2305 2306 2307 2308 2309 2310 2311 2312 2313 2314 2315 2316 2317 2318 | ** ^(There can only be a single busy handler for a particular ** [database connection] at any given moment. If another busy handler ** was defined (using [sqlite3_busy_handler()]) prior to calling ** this routine, that other busy handler is cleared.)^ ** ** See also: [PRAGMA busy_timeout] */ SQLITE_API int sqlite3_busy_timeout(sqlite3*, int ms); /* ** CAPI3REF: Convenience Routines For Running Queries ** METHOD: sqlite3 ** ** This is a legacy interface that is preserved for backwards compatibility. ** Use of this interface is not recommended. |
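The two ways of dealing with SQLITE_BUSY described above, side by side; a sketch only, with an arbitrary 2-second timeout and a 10-retry cap.

#include <stdio.h>
#include <sqlite3.h>

/* Either install the built-in sleeping handler... */
static void use_builtin_timeout(sqlite3 *db){
  sqlite3_busy_timeout(db, 2000);            /* retry for up to 2 seconds */
}

/* ...or register a custom handler: return 0 to give up (the caller then
** sees SQLITE_BUSY), non-zero to have SQLite try the lock again. */
static int my_busy(void *arg, int nTries){
  (void)arg;
  fprintf(stderr, "database locked, attempt %d\n", nTries);
  return nTries < 10;
}

static void use_custom_handler(sqlite3 *db){
  sqlite3_busy_handler(db, my_busy, 0);
}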
︙ | ︙ | |||
2326 2327 2328 2329 2330 2331 2332 | ** [sqlite3_exec()]. The sqlite3_get_table() routine does not have access ** to any internal data structures of SQLite. It uses only the public ** interface defined here. As a consequence, errors that occur in the ** wrapper layer outside of the internal [sqlite3_exec()] call are not ** reflected in subsequent calls to [sqlite3_errcode()] or ** [sqlite3_errmsg()]. */ | | | | 2379 2380 2381 2382 2383 2384 2385 2386 2387 2388 2389 2390 2391 2392 2393 2394 2395 2396 2397 2398 2399 2400 2401 | ** [sqlite3_exec()]. The sqlite3_get_table() routine does not have access ** to any internal data structures of SQLite. It uses only the public ** interface defined here. As a consequence, errors that occur in the ** wrapper layer outside of the internal [sqlite3_exec()] call are not ** reflected in subsequent calls to [sqlite3_errcode()] or ** [sqlite3_errmsg()]. */ SQLITE_API int sqlite3_get_table( sqlite3 *db, /* An open database */ const char *zSql, /* SQL to be evaluated */ char ***pazResult, /* Results of the query */ int *pnRow, /* Number of result rows written here */ int *pnColumn, /* Number of result columns written here */ char **pzErrmsg /* Error msg written here */ ); SQLITE_API void sqlite3_free_table(char **result); /* ** CAPI3REF: Formatted String Printing Functions ** ** These routines are work-alikes of the "printf()" family of functions ** from the standard C library. ** These routines understand most of the common K&R formatting options, |
︙ | ︙ | |||
2440 2441 2442 2443 2444 2445 2446 | ** character.)^ The "%w" formatting option is intended for safely inserting ** table and column names into a constructed SQL statement. ** ** ^(The "%z" formatting option works like "%s" but with the ** addition that after the string has been read and copied into ** the result, [sqlite3_free()] is called on the input string.)^ */ | | | | | | 2493 2494 2495 2496 2497 2498 2499 2500 2501 2502 2503 2504 2505 2506 2507 2508 2509 2510 | ** character.)^ The "%w" formatting option is intended for safely inserting ** table and column names into a constructed SQL statement. ** ** ^(The "%z" formatting option works like "%s" but with the ** addition that after the string has been read and copied into ** the result, [sqlite3_free()] is called on the input string.)^ */ SQLITE_API char *sqlite3_mprintf(const char*,...); SQLITE_API char *sqlite3_vmprintf(const char*, va_list); SQLITE_API char *sqlite3_snprintf(int,char*,const char*, ...); SQLITE_API char *sqlite3_vsnprintf(int,char*,const char*, va_list); /* ** CAPI3REF: Memory Allocation Subsystem ** ** The SQLite core uses these three routines for all of its own ** internal memory allocation needs. "Core" in the previous sentence ** does not include operating-system specific VFS implementation. The |
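A short illustration of the %q formatting option discussed above; the ticket table is a hypothetical example, and the returned string must eventually be released with sqlite3_free().

#include <sqlite3.h>

/* Build SQL with %q so a single quote inside zTitle cannot break out of
** the string literal; %Q would additionally supply the outer quotes and
** render a null pointer as NULL. */
static char *make_insert(const char *zTitle){
  char *zSql = sqlite3_mprintf(
      "INSERT INTO ticket(title) VALUES('%q');", zTitle);
  return zSql;   /* caller runs it with sqlite3_exec(), then sqlite3_free() */
}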
︙ | ︙ | |||
2533 2534 2535 2536 2537 2538 2539 | ** invocation of [sqlite3_malloc()] or [sqlite3_realloc()] that have ** not yet been released. ** ** The application must not read or write any part of ** a block of memory after it has been released using ** [sqlite3_free()] or [sqlite3_realloc()]. */ | | | | | | | | 2586 2587 2588 2589 2590 2591 2592 2593 2594 2595 2596 2597 2598 2599 2600 2601 2602 2603 2604 2605 | ** invocation of [sqlite3_malloc()] or [sqlite3_realloc()] that have ** not yet been released. ** ** The application must not read or write any part of ** a block of memory after it has been released using ** [sqlite3_free()] or [sqlite3_realloc()]. */ SQLITE_API void *sqlite3_malloc(int); SQLITE_API void *sqlite3_malloc64(sqlite3_uint64); SQLITE_API void *sqlite3_realloc(void*, int); SQLITE_API void *sqlite3_realloc64(void*, sqlite3_uint64); SQLITE_API void sqlite3_free(void*); SQLITE_API sqlite3_uint64 sqlite3_msize(void*); /* ** CAPI3REF: Memory Allocator Statistics ** ** SQLite provides these two interfaces for reporting on the status ** of the [sqlite3_malloc()], [sqlite3_free()], and [sqlite3_realloc()] ** routines, which form the built-in memory allocation subsystem. |
︙ | ︙ | |||
2563 2564 2565 2566 2567 2568 2569 | ** ** ^The memory high-water mark is reset to the current value of ** [sqlite3_memory_used()] if and only if the parameter to ** [sqlite3_memory_highwater()] is true. ^The value returned ** by [sqlite3_memory_highwater(1)] is the high-water mark ** prior to the reset. */ | | | | 2616 2617 2618 2619 2620 2621 2622 2623 2624 2625 2626 2627 2628 2629 2630 2631 | ** ** ^The memory high-water mark is reset to the current value of ** [sqlite3_memory_used()] if and only if the parameter to ** [sqlite3_memory_highwater()] is true. ^The value returned ** by [sqlite3_memory_highwater(1)] is the high-water mark ** prior to the reset. */ SQLITE_API sqlite3_int64 sqlite3_memory_used(void); SQLITE_API sqlite3_int64 sqlite3_memory_highwater(int resetFlag); /* ** CAPI3REF: Pseudo-Random Number Generator ** ** SQLite contains a high-quality pseudo-random number generator (PRNG) used to ** select random [ROWID | ROWIDs] when inserting new records into a table that ** already uses the largest possible [ROWID]. The PRNG is also used for |
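A quick sketch combining sqlite3_malloc64(), sqlite3_msize(), and the two memory-statistics calls documented above; purely illustrative.

#include <stdio.h>
#include <sqlite3.h>

static void memory_demo(void){
  void *p = sqlite3_malloc64(1024);
  if( p==0 ) return;
  printf("block size : %llu bytes\n",
         (unsigned long long)sqlite3_msize(p));
  printf("in use     : %lld bytes\n",
         (long long)sqlite3_memory_used());
  printf("high-water : %lld bytes\n",
         (long long)sqlite3_memory_highwater(0));   /* 0 = do not reset */
  sqlite3_free(p);
}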
︙ | ︙ | |||
2587 2588 2589 2590 2591 2592 2593 | ** seeded using randomness obtained from the xRandomness method of ** the default [sqlite3_vfs] object. ** ^If the previous call to this routine had an N of 1 or more and a ** non-NULL P then the pseudo-randomness is generated ** internally and without recourse to the [sqlite3_vfs] xRandomness ** method. */ | | | 2640 2641 2642 2643 2644 2645 2646 2647 2648 2649 2650 2651 2652 2653 2654 | ** seeded using randomness obtained from the xRandomness method of ** the default [sqlite3_vfs] object. ** ^If the previous call to this routine had an N of 1 or more and a ** non-NULL P then the pseudo-randomness is generated ** internally and without recourse to the [sqlite3_vfs] xRandomness ** method. */ SQLITE_API void sqlite3_randomness(int N, void *P); /* ** CAPI3REF: Compile-Time Authorization Callbacks ** METHOD: sqlite3 ** ** ^This routine registers an authorizer callback with a particular ** [database connection], supplied in the first argument. |
︙ | ︙ | |||
2670 2671 2672 2673 2674 2675 2676 | ** ** ^Note that the authorizer callback is invoked only during ** [sqlite3_prepare()] or its variants. Authorization is not ** performed during statement evaluation in [sqlite3_step()], unless ** as stated in the previous paragraph, sqlite3_step() invokes ** sqlite3_prepare_v2() to reprepare a statement after a schema change. */ | | | 2723 2724 2725 2726 2727 2728 2729 2730 2731 2732 2733 2734 2735 2736 2737 | ** ** ^Note that the authorizer callback is invoked only during ** [sqlite3_prepare()] or its variants. Authorization is not ** performed during statement evaluation in [sqlite3_step()], unless ** as stated in the previous paragraph, sqlite3_step() invokes ** sqlite3_prepare_v2() to reprepare a statement after a schema change. */ SQLITE_API int sqlite3_set_authorizer( sqlite3*, int (*xAuth)(void*,int,const char*,const char*,const char*,const char*), void *pUserData ); /* ** CAPI3REF: Authorizer Return Codes |
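A minimal authorizer sketch based on the interface documented above: it vetoes compilation of any statement that reads a hypothetical table named "secret" and allows everything else.

#include <string.h>
#include <sqlite3.h>

/* For SQLITE_READ the 3rd argument is the table name and the 4th the
** column name; returning SQLITE_DENY makes sqlite3_prepare_v2() fail. */
static int authorizer(void *arg, int op,
                      const char *zTab, const char *zCol,
                      const char *zDb, const char *zTrigger){
  (void)arg; (void)zCol; (void)zDb; (void)zTrigger;
  if( op==SQLITE_READ && zTab && strcmp(zTab, "secret")==0 ){
    return SQLITE_DENY;
  }
  return SQLITE_OK;
}

static void install_authorizer(sqlite3 *db){
  sqlite3_set_authorizer(db, authorizer, 0);
}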
︙ | ︙ | |||
2749 2750 2751 2752 2753 2754 2755 2756 2757 2758 2759 2760 2761 2762 | #define SQLITE_SAVEPOINT 32 /* Operation Savepoint Name */ #define SQLITE_COPY 0 /* No longer used */ #define SQLITE_RECURSIVE 33 /* NULL NULL */ /* ** CAPI3REF: Tracing And Profiling Functions ** METHOD: sqlite3 ** ** These routines register callback functions that can be used for ** tracing and profiling the execution of SQL statements. ** ** ^The callback function registered by sqlite3_trace() is invoked at ** various times when an SQL statement is being run by [sqlite3_step()]. ** ^The sqlite3_trace() callback is invoked with a UTF-8 rendering of the | > > > | 2802 2803 2804 2805 2806 2807 2808 2809 2810 2811 2812 2813 2814 2815 2816 2817 2818 | #define SQLITE_SAVEPOINT 32 /* Operation Savepoint Name */ #define SQLITE_COPY 0 /* No longer used */ #define SQLITE_RECURSIVE 33 /* NULL NULL */ /* ** CAPI3REF: Tracing And Profiling Functions ** METHOD: sqlite3 ** ** These routines are deprecated. Use the [sqlite3_trace_v2()] interface ** instead of the routines described here. ** ** These routines register callback functions that can be used for ** tracing and profiling the execution of SQL statements. ** ** ^The callback function registered by sqlite3_trace() is invoked at ** various times when an SQL statement is being run by [sqlite3_step()]. ** ^The sqlite3_trace() callback is invoked with a UTF-8 rendering of the |
︙ | ︙ | |||
2775 2776 2777 2778 2779 2780 2781 | ** time is in units of nanoseconds, however the current implementation ** is only capable of millisecond resolution so the six least significant ** digits in the time are meaningless. Future versions of SQLite ** might provide greater resolution on the profiler callback. The ** sqlite3_profile() function is considered experimental and is ** subject to change in future versions of SQLite. */ | > | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 2831 2832 2833 2834 2835 2836 2837 2838 2839 2840 2841 2842 2843 2844 2845 2846 2847 2848 2849 2850 2851 2852 2853 2854 2855 2856 2857 2858 2859 2860 2861 2862 2863 2864 2865 2866 2867 2868 2869 2870 2871 2872 2873 2874 2875 2876 2877 2878 2879 2880 2881 2882 2883 2884 2885 2886 2887 2888 2889 2890 2891 2892 2893 2894 2895 2896 2897 2898 2899 2900 2901 2902 2903 2904 2905 2906 2907 2908 2909 2910 2911 2912 2913 2914 2915 2916 2917 2918 2919 2920 2921 2922 2923 2924 2925 2926 2927 2928 2929 2930 2931 2932 2933 2934 2935 2936 2937 2938 2939 2940 2941 | ** time is in units of nanoseconds, however the current implementation ** is only capable of millisecond resolution so the six least significant ** digits in the time are meaningless. Future versions of SQLite ** might provide greater resolution on the profiler callback. The ** sqlite3_profile() function is considered experimental and is ** subject to change in future versions of SQLite. */ SQLITE_API SQLITE_DEPRECATED void *sqlite3_trace(sqlite3*, void(*xTrace)(void*,const char*), void*); SQLITE_API SQLITE_DEPRECATED void *sqlite3_profile(sqlite3*, void(*xProfile)(void*,const char*,sqlite3_uint64), void*); /* ** CAPI3REF: SQL Trace Event Codes ** KEYWORDS: SQLITE_TRACE ** ** These constants identify classes of events that can be monitored ** using the [sqlite3_trace_v2()] tracing logic. The third argument ** to [sqlite3_trace_v2()] is an OR-ed combination of one or more of ** the following constants. ^The first argument to the trace callback ** is one of the following constants. ** ** New tracing constants may be added in future releases. ** ** ^A trace callback has four arguments: xCallback(T,C,P,X). ** ^The T argument is one of the integer type codes above. ** ^The C argument is a copy of the context pointer passed in as the ** fourth argument to [sqlite3_trace_v2()]. ** The P and X arguments are pointers whose meanings depend on T. ** ** <dl> ** [[SQLITE_TRACE_STMT]] <dt>SQLITE_TRACE_STMT</dt> ** <dd>^An SQLITE_TRACE_STMT callback is invoked when a prepared statement ** first begins running and possibly at other times during the ** execution of the prepared statement, such as at the start of each ** trigger subprogram. ^The P argument is a pointer to the ** [prepared statement]. ^The X argument is a pointer to a string which ** is the unexpanded SQL text of the prepared statement or an SQL comment ** that indicates the invocation of a trigger. ^The callback can compute ** the same text that would have been returned by the legacy [sqlite3_trace()] ** interface by using the X argument when X begins with "--" and invoking ** [sqlite3_expanded_sql(P)] otherwise. ** ** [[SQLITE_TRACE_PROFILE]] <dt>SQLITE_TRACE_PROFILE</dt> ** <dd>^An SQLITE_TRACE_PROFILE callback provides approximately the same ** information as is provided by the [sqlite3_profile()] callback. 
** ^The P argument is a pointer to the [prepared statement] and the ** X argument points to a 64-bit integer which is the estimated of ** the number of nanosecond that the prepared statement took to run. ** ^The SQLITE_TRACE_PROFILE callback is invoked when the statement finishes. ** ** [[SQLITE_TRACE_ROW]] <dt>SQLITE_TRACE_ROW</dt> ** <dd>^An SQLITE_TRACE_ROW callback is invoked whenever a prepared ** statement generates a single row of result. ** ^The P argument is a pointer to the [prepared statement] and the ** X argument is unused. ** ** [[SQLITE_TRACE_CLOSE]] <dt>SQLITE_TRACE_CLOSE</dt> ** <dd>^An SQLITE_TRACE_CLOSE callback is invoked when a database ** connection closes. ** ^The P argument is a pointer to the [database connection] object ** and the X argument is unused. ** </dl> */ #define SQLITE_TRACE_STMT 0x01 #define SQLITE_TRACE_PROFILE 0x02 #define SQLITE_TRACE_ROW 0x04 #define SQLITE_TRACE_CLOSE 0x08 /* ** CAPI3REF: SQL Trace Hook ** METHOD: sqlite3 ** ** ^The sqlite3_trace_v2(D,M,X,P) interface registers a trace callback ** function X against [database connection] D, using property mask M ** and context pointer P. ^If the X callback is ** NULL or if the M mask is zero, then tracing is disabled. The ** M argument should be the bitwise OR-ed combination of ** zero or more [SQLITE_TRACE] constants. ** ** ^Each call to either sqlite3_trace() or sqlite3_trace_v2() overrides ** (cancels) any prior calls to sqlite3_trace() or sqlite3_trace_v2(). ** ** ^The X callback is invoked whenever any of the events identified by ** mask M occur. ^The integer return value from the callback is currently ** ignored, though this may change in future releases. Callback ** implementations should return zero to ensure future compatibility. ** ** ^A trace callback is invoked with four arguments: callback(T,C,P,X). ** ^The T argument is one of the [SQLITE_TRACE] ** constants to indicate why the callback was invoked. ** ^The C argument is a copy of the context pointer. ** The P and X arguments are pointers whose meanings depend on T. ** ** The sqlite3_trace_v2() interface is intended to replace the legacy ** interfaces [sqlite3_trace()] and [sqlite3_profile()], both of which ** are deprecated. */ SQLITE_API int sqlite3_trace_v2( sqlite3*, unsigned uMask, int(*xCallback)(unsigned,void*,void*,void*), void *pCtx ); /* ** CAPI3REF: Query Progress Callbacks ** METHOD: sqlite3 ** ** ^The sqlite3_progress_handler(D,N,X,P) interface causes the callback ** function X to be invoked periodically during long running calls to |
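The sqlite3_trace_v2() interface introduced in this hunk replaces the deprecated trace/profile pair; below is a minimal sketch of a combined statement/profile callback, assuming the connection is opened elsewhere.

#include <stdio.h>
#include <sqlite3.h>

/* One callback receives all enabled event classes; T says which one. */
static int trace_cb(unsigned T, void *ctx, void *P, void *X){
  (void)ctx;
  if( T==SQLITE_TRACE_STMT ){
    char *zSql = sqlite3_expanded_sql((sqlite3_stmt*)P);
    fprintf(stderr, "run: %s\n", zSql ? zSql : (const char*)X);
    sqlite3_free(zSql);
  }else if( T==SQLITE_TRACE_PROFILE ){
    sqlite3_int64 ns = *(sqlite3_int64*)X;   /* estimated run time */
    fprintf(stderr, "took %lld ns\n", (long long)ns);
  }
  return 0;
}

static void install_trace(sqlite3 *db){
  sqlite3_trace_v2(db, SQLITE_TRACE_STMT|SQLITE_TRACE_PROFILE, trace_cb, 0);
}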
︙ | ︙ | |||
2811 2812 2813 2814 2815 2816 2817 | ** ** The progress handler callback must not do anything that will modify ** the database connection that invoked the progress handler. ** Note that [sqlite3_prepare_v2()] and [sqlite3_step()] both modify their ** database connections for the meaning of "modify" in this paragraph. ** */ | | | 2961 2962 2963 2964 2965 2966 2967 2968 2969 2970 2971 2972 2973 2974 2975 | ** ** The progress handler callback must not do anything that will modify ** the database connection that invoked the progress handler. ** Note that [sqlite3_prepare_v2()] and [sqlite3_step()] both modify their ** database connections for the meaning of "modify" in this paragraph. ** */ SQLITE_API void sqlite3_progress_handler(sqlite3*, int, int(*)(void*), void*); /* ** CAPI3REF: Opening A New Database Connection ** CONSTRUCTOR: sqlite3 ** ** ^These routines open an SQLite database file as specified by the ** filename argument. ^The filename argument is interpreted as UTF-8 for |
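A small sketch of the progress-handler mechanism described above, using an arbitrary granularity of 1000 virtual-machine instructions and a flag that some other part of the program would set to cancel a long-running query.

#include <sqlite3.h>

static volatile int gCancel = 0;   /* set elsewhere, e.g. from a UI event */

/* Returning non-zero makes the running statement stop with SQLITE_INTERRUPT. */
static int progress(void *arg){
  (void)arg;
  return gCancel;
}

static void install_progress(sqlite3 *db){
  sqlite3_progress_handler(db, 1000, progress, 0);
}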
︙ | ︙ | |||
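A hedged sketch of using the sqlite3_progress_handler() interface declared above. The g_cancelRequested flag and the opcode interval are assumptions for this example; some other part of the program would set the flag when the user asks to abort.

#include "sqlite3.h"

static volatile int g_cancelRequested = 0;   /* hypothetical: set from a UI or signal handler */

/* Returning non-zero from the progress callback interrupts the running statement. */
static int progressCallback(void *pUnused){
  (void)pUnused;
  return g_cancelRequested;
}

static void installProgressHandler(sqlite3 *db){
  /* invoke the callback roughly every 1000 virtual-machine opcodes */
  sqlite3_progress_handler(db, 1000, progressCallback, 0);
}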
3040 3041 3042 3043 3044 3045 3046 | ** ** <b>Note to Windows Runtime users:</b> The temporary directory must be set ** prior to calling sqlite3_open() or sqlite3_open_v2(). Otherwise, various ** features that require the use of temporary files may fail. ** ** See also: [sqlite3_temp_directory] */ | | | | | 3190 3191 3192 3193 3194 3195 3196 3197 3198 3199 3200 3201 3202 3203 3204 3205 3206 3207 3208 3209 3210 3211 3212 | ** ** <b>Note to Windows Runtime users:</b> The temporary directory must be set ** prior to calling sqlite3_open() or sqlite3_open_v2(). Otherwise, various ** features that require the use of temporary files may fail. ** ** See also: [sqlite3_temp_directory] */ SQLITE_API int sqlite3_open( const char *filename, /* Database filename (UTF-8) */ sqlite3 **ppDb /* OUT: SQLite db handle */ ); SQLITE_API int sqlite3_open16( const void *filename, /* Database filename (UTF-16) */ sqlite3 **ppDb /* OUT: SQLite db handle */ ); SQLITE_API int sqlite3_open_v2( const char *filename, /* Database filename (UTF-8) */ sqlite3 **ppDb, /* OUT: SQLite db handle */ int flags, /* Flags */ const char *zVfs /* Name of VFS module to use */ ); /* |
︙ | ︙ | |||
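A minimal sketch of opening a connection with sqlite3_open_v2() as declared above; the filename, flags, and error handling are illustrative only.

#include <stdio.h>
#include "sqlite3.h"

static int openDatabase(const char *zFilename, sqlite3 **ppDb){
  int rc = sqlite3_open_v2(zFilename, ppDb,
                           SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE, 0);
  if( rc!=SQLITE_OK ){
    /* a handle is usually returned even on failure, so the error message is available */
    fprintf(stderr, "cannot open %s: %s\n", zFilename, sqlite3_errmsg(*ppDb));
    sqlite3_close(*ppDb);
    *ppDb = 0;
  }
  return rc;
}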
3094 3095 3096 3097 3098 3099 3100 | ** ** If F is a NULL pointer, then sqlite3_uri_parameter(F,P) returns NULL and ** sqlite3_uri_boolean(F,P,B) returns B. If F is not a NULL pointer and ** is not a database file pathname pointer that SQLite passed into the xOpen ** VFS method, then the behavior of this routine is undefined and probably ** undesirable. */ | | | | | 3244 3245 3246 3247 3248 3249 3250 3251 3252 3253 3254 3255 3256 3257 3258 3259 3260 | ** ** If F is a NULL pointer, then sqlite3_uri_parameter(F,P) returns NULL and ** sqlite3_uri_boolean(F,P,B) returns B. If F is not a NULL pointer and ** is not a database file pathname pointer that SQLite passed into the xOpen ** VFS method, then the behavior of this routine is undefined and probably ** undesirable. */ SQLITE_API const char *sqlite3_uri_parameter(const char *zFilename, const char *zParam); SQLITE_API int sqlite3_uri_boolean(const char *zFile, const char *zParam, int bDefault); SQLITE_API sqlite3_int64 sqlite3_uri_int64(const char*, const char*, sqlite3_int64); /* ** CAPI3REF: Error Codes And Messages ** METHOD: sqlite3 ** ** ^If the most recent sqlite3_* API call associated with |
︙ | ︙ | |||
3140 3141 3142 3143 3144 3145 3146 | ** to use D and invoking [sqlite3_mutex_leave]([sqlite3_db_mutex](D)) after ** all calls to the interfaces listed here are completed. ** ** If an interface fails with SQLITE_MISUSE, that means the interface ** was invoked incorrectly by the application. In that case, the ** error code and message may or may not be set. */ | | | | | | | 3290 3291 3292 3293 3294 3295 3296 3297 3298 3299 3300 3301 3302 3303 3304 3305 3306 3307 3308 | ** to use D and invoking [sqlite3_mutex_leave]([sqlite3_db_mutex](D)) after ** all calls to the interfaces listed here are completed. ** ** If an interface fails with SQLITE_MISUSE, that means the interface ** was invoked incorrectly by the application. In that case, the ** error code and message may or may not be set. */ SQLITE_API int sqlite3_errcode(sqlite3 *db); SQLITE_API int sqlite3_extended_errcode(sqlite3 *db); SQLITE_API const char *sqlite3_errmsg(sqlite3*); SQLITE_API const void *sqlite3_errmsg16(sqlite3*); SQLITE_API const char *sqlite3_errstr(int); /* ** CAPI3REF: Prepared Statement Object ** KEYWORDS: {prepared statement} {prepared statements} ** ** An instance of this object represents a single SQL statement that ** has been compiled into binary form and is ready to be evaluated. |
︙ | ︙ | |||
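To make the error interfaces above concrete, a small helper of the kind an application might write; the function name and output format are invented.

#include <stdio.h>
#include "sqlite3.h"

/* Print the basic and extended error codes plus the English-language message. */
static void reportError(sqlite3 *db, const char *zContext){
  fprintf(stderr, "%s: error %d/%d: %s (%s)\n",
          zContext,
          sqlite3_errcode(db),
          sqlite3_extended_errcode(db),
          sqlite3_errmsg(db),
          sqlite3_errstr(sqlite3_errcode(db)));
}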
3212 3213 3214 3215 3216 3217 3218 | ** attack. Developers might also want to use the [sqlite3_set_authorizer()] ** interface to further control untrusted SQL. The size of the database ** created by an untrusted script can be contained using the ** [max_page_count] [PRAGMA]. ** ** New run-time limit categories may be added in future releases. */ | | | 3362 3363 3364 3365 3366 3367 3368 3369 3370 3371 3372 3373 3374 3375 3376 | ** attack. Developers might also want to use the [sqlite3_set_authorizer()] ** interface to further control untrusted SQL. The size of the database ** created by an untrusted script can be contained using the ** [max_page_count] [PRAGMA]. ** ** New run-time limit categories may be added in future releases. */ SQLITE_API int sqlite3_limit(sqlite3*, int id, int newVal); /* ** CAPI3REF: Run-Time Limit Categories ** KEYWORDS: {limit category} {*limit categories} ** ** These constants define various performance limits ** that can be lowered at run-time using [sqlite3_limit()]. |
︙ | ︙ | |||
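Following the advice above about applications that evaluate untrusted SQL, a sketch of tightening a few run-time limits right after opening a connection. The particular limit values here are arbitrary assumptions, not recommendations.

#include "sqlite3.h"

static void hardenConnection(sqlite3 *db){
  sqlite3_limit(db, SQLITE_LIMIT_LENGTH,     1000000);  /* strings and BLOBs: ~1 MB */
  sqlite3_limit(db, SQLITE_LIMIT_SQL_LENGTH,  100000);  /* SQL statement text: ~100 KB */
  sqlite3_limit(db, SQLITE_LIMIT_EXPR_DEPTH,     100);  /* expression tree depth */
  /* sqlite3_limit() returns the previous value, so limits can be saved and restored */
}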
3364 3365 3366 3367 3368 3369 3370 | ** ^The specific value of WHERE-clause [parameter] might influence the ** choice of query plan if the parameter is the left-hand side of a [LIKE] ** or [GLOB] operator or if the parameter is compared to an indexed column ** and the [SQLITE_ENABLE_STAT3] compile-time option is enabled. ** </li> ** </ol> */ | | | | | | | | > > > > > > > > > > > > > > > > > > > > > > > | > | 3514 3515 3516 3517 3518 3519 3520 3521 3522 3523 3524 3525 3526 3527 3528 3529 3530 3531 3532 3533 3534 3535 3536 3537 3538 3539 3540 3541 3542 3543 3544 3545 3546 3547 3548 3549 3550 3551 3552 3553 3554 3555 3556 3557 3558 3559 3560 3561 3562 3563 3564 3565 3566 3567 3568 3569 3570 3571 3572 3573 3574 3575 3576 3577 3578 3579 3580 3581 3582 3583 3584 3585 3586 3587 3588 3589 | ** ^The specific value of WHERE-clause [parameter] might influence the ** choice of query plan if the parameter is the left-hand side of a [LIKE] ** or [GLOB] operator or if the parameter is compared to an indexed column ** and the [SQLITE_ENABLE_STAT3] compile-time option is enabled. ** </li> ** </ol> */ SQLITE_API int sqlite3_prepare( sqlite3 *db, /* Database handle */ const char *zSql, /* SQL statement, UTF-8 encoded */ int nByte, /* Maximum length of zSql in bytes. */ sqlite3_stmt **ppStmt, /* OUT: Statement handle */ const char **pzTail /* OUT: Pointer to unused portion of zSql */ ); SQLITE_API int sqlite3_prepare_v2( sqlite3 *db, /* Database handle */ const char *zSql, /* SQL statement, UTF-8 encoded */ int nByte, /* Maximum length of zSql in bytes. */ sqlite3_stmt **ppStmt, /* OUT: Statement handle */ const char **pzTail /* OUT: Pointer to unused portion of zSql */ ); SQLITE_API int sqlite3_prepare16( sqlite3 *db, /* Database handle */ const void *zSql, /* SQL statement, UTF-16 encoded */ int nByte, /* Maximum length of zSql in bytes. */ sqlite3_stmt **ppStmt, /* OUT: Statement handle */ const void **pzTail /* OUT: Pointer to unused portion of zSql */ ); SQLITE_API int sqlite3_prepare16_v2( sqlite3 *db, /* Database handle */ const void *zSql, /* SQL statement, UTF-16 encoded */ int nByte, /* Maximum length of zSql in bytes. */ sqlite3_stmt **ppStmt, /* OUT: Statement handle */ const void **pzTail /* OUT: Pointer to unused portion of zSql */ ); /* ** CAPI3REF: Retrieving Statement SQL ** METHOD: sqlite3_stmt ** ** ^The sqlite3_sql(P) interface returns a pointer to a copy of the UTF-8 ** SQL text used to create [prepared statement] P if P was ** created by either [sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()]. ** ^The sqlite3_expanded_sql(P) interface returns a pointer to a UTF-8 ** string containing the SQL text of prepared statement P with ** [bound parameters] expanded. ** ** ^(For example, if a prepared statement is created using the SQL ** text "SELECT $abc,:xyz" and if parameter $abc is bound to integer 2345 ** and parameter :xyz is unbound, then sqlite3_sql() will return ** the original string, "SELECT $abc,:xyz" but sqlite3_expanded_sql() ** will return "SELECT 2345,NULL".)^ ** ** ^The sqlite3_expanded_sql() interface returns NULL if insufficient memory ** is available to hold the result, or if the result would exceed the ** the maximum string length determined by the [SQLITE_LIMIT_LENGTH]. ** ** ^The [SQLITE_TRACE_SIZE_LIMIT] compile-time option limits the size of ** bound parameter expansions. ^The [SQLITE_OMIT_TRACE] compile-time ** option causes sqlite3_expanded_sql() to always return NULL. 
** ** ^The string returned by sqlite3_sql(P) is managed by SQLite and is
** automatically freed when the prepared statement is finalized.
** ^The string returned by sqlite3_expanded_sql(P), on the other hand,
** is obtained from [sqlite3_malloc()] and must be freed by the application
** by passing it to [sqlite3_free()]. */ SQLITE_API const char *sqlite3_sql(sqlite3_stmt *pStmt); SQLITE_API char *sqlite3_expanded_sql(sqlite3_stmt *pStmt); /* ** CAPI3REF: Determine If An SQL Statement Writes The Database ** METHOD: sqlite3_stmt ** ** ^The sqlite3_stmt_readonly(X) interface returns true (non-zero) if ** and only if the [prepared statement] X makes no direct changes to
︙ | ︙ | |||
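A short sketch tying together sqlite3_prepare_v2() and sqlite3_expanded_sql() from the block above; the SQL text mirrors the $abc/:xyz example in the documentation, and the function name is invented.

#include <stdio.h>
#include "sqlite3.h"

static void showExpandedSql(sqlite3 *db){
  sqlite3_stmt *pStmt = 0;
  if( sqlite3_prepare_v2(db, "SELECT $abc,:xyz", -1, &pStmt, 0)==SQLITE_OK ){
    sqlite3_bind_int(pStmt, sqlite3_bind_parameter_index(pStmt, "$abc"), 2345);
    char *zExpanded = sqlite3_expanded_sql(pStmt);  /* per the text above: "SELECT 2345,NULL" */
    printf("%s\n", zExpanded ? zExpanded : "(out of memory)");
    sqlite3_free(zExpanded);                        /* expanded text must be freed */
    sqlite3_finalize(pStmt);
  }
}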
3433 3434 3435 3436 3437 3438 3439 | ** since the statements themselves do not actually modify the database but ** rather they control the timing of when other statements modify the ** database. ^The [ATTACH] and [DETACH] statements also cause ** sqlite3_stmt_readonly() to return true since, while those statements ** change the configuration of a database connection, they do not make ** changes to the content of the database files on disk. */ | | | | 3607 3608 3609 3610 3611 3612 3613 3614 3615 3616 3617 3618 3619 3620 3621 3622 3623 3624 3625 3626 3627 3628 3629 3630 3631 3632 3633 3634 3635 3636 3637 3638 3639 3640 3641 3642 | ** since the statements themselves do not actually modify the database but ** rather they control the timing of when other statements modify the ** database. ^The [ATTACH] and [DETACH] statements also cause ** sqlite3_stmt_readonly() to return true since, while those statements ** change the configuration of a database connection, they do not make ** changes to the content of the database files on disk. */ SQLITE_API int sqlite3_stmt_readonly(sqlite3_stmt *pStmt); /* ** CAPI3REF: Determine If A Prepared Statement Has Been Reset ** METHOD: sqlite3_stmt ** ** ^The sqlite3_stmt_busy(S) interface returns true (non-zero) if the ** [prepared statement] S has been stepped at least once using ** [sqlite3_step(S)] but has neither run to completion (returned ** [SQLITE_DONE] from [sqlite3_step(S)]) nor ** been reset using [sqlite3_reset(S)]. ^The sqlite3_stmt_busy(S) ** interface returns false if S is a NULL pointer. If S is not a ** NULL pointer and is not a pointer to a valid [prepared statement] ** object, then the behavior is undefined and probably undesirable. ** ** This interface can be used in combination [sqlite3_next_stmt()] ** to locate all prepared statements associated with a database ** connection that are in need of being reset. This can be used, ** for example, in diagnostic routines to search for prepared ** statements that are holding a transaction open. */ SQLITE_API int sqlite3_stmt_busy(sqlite3_stmt*); /* ** CAPI3REF: Dynamically Typed Value Object ** KEYWORDS: {protected sqlite3_value} {unprotected sqlite3_value} ** ** SQLite uses the sqlite3_value object to represent all values ** that can be stored in a database table. SQLite uses dynamic typing |
︙ | ︙ | |||
3618 3619 3620 3621 3622 3623 3624 | ** [SQLITE_MAX_LENGTH]. ** ^[SQLITE_RANGE] is returned if the parameter ** index is out of range. ^[SQLITE_NOMEM] is returned if malloc() fails. ** ** See also: [sqlite3_bind_parameter_count()], ** [sqlite3_bind_parameter_name()], and [sqlite3_bind_parameter_index()]. */ | | | | | | | | | | | | | | | 3792 3793 3794 3795 3796 3797 3798 3799 3800 3801 3802 3803 3804 3805 3806 3807 3808 3809 3810 3811 3812 3813 3814 3815 3816 3817 3818 3819 3820 3821 3822 3823 3824 3825 3826 3827 3828 3829 3830 3831 3832 3833 3834 3835 3836 3837 3838 3839 3840 | ** [SQLITE_MAX_LENGTH]. ** ^[SQLITE_RANGE] is returned if the parameter ** index is out of range. ^[SQLITE_NOMEM] is returned if malloc() fails. ** ** See also: [sqlite3_bind_parameter_count()], ** [sqlite3_bind_parameter_name()], and [sqlite3_bind_parameter_index()]. */ SQLITE_API int sqlite3_bind_blob(sqlite3_stmt*, int, const void*, int n, void(*)(void*)); SQLITE_API int sqlite3_bind_blob64(sqlite3_stmt*, int, const void*, sqlite3_uint64, void(*)(void*)); SQLITE_API int sqlite3_bind_double(sqlite3_stmt*, int, double); SQLITE_API int sqlite3_bind_int(sqlite3_stmt*, int, int); SQLITE_API int sqlite3_bind_int64(sqlite3_stmt*, int, sqlite3_int64); SQLITE_API int sqlite3_bind_null(sqlite3_stmt*, int); SQLITE_API int sqlite3_bind_text(sqlite3_stmt*,int,const char*,int,void(*)(void*)); SQLITE_API int sqlite3_bind_text16(sqlite3_stmt*, int, const void*, int, void(*)(void*)); SQLITE_API int sqlite3_bind_text64(sqlite3_stmt*, int, const char*, sqlite3_uint64, void(*)(void*), unsigned char encoding); SQLITE_API int sqlite3_bind_value(sqlite3_stmt*, int, const sqlite3_value*); SQLITE_API int sqlite3_bind_zeroblob(sqlite3_stmt*, int, int n); SQLITE_API int sqlite3_bind_zeroblob64(sqlite3_stmt*, int, sqlite3_uint64); /* ** CAPI3REF: Number Of SQL Parameters ** METHOD: sqlite3_stmt ** ** ^This routine can be used to find the number of [SQL parameters] ** in a [prepared statement]. SQL parameters are tokens of the ** form "?", "?NNN", ":AAA", "$AAA", or "@AAA" that serve as ** placeholders for values that are [sqlite3_bind_blob | bound] ** to the parameters at a later time. ** ** ^(This routine actually returns the index of the largest (rightmost) ** parameter. For all forms except ?NNN, this will correspond to the ** number of unique parameters. If parameters of the ?NNN form are used, ** there may be gaps in the list.)^ ** ** See also: [sqlite3_bind_blob|sqlite3_bind()], ** [sqlite3_bind_parameter_name()], and ** [sqlite3_bind_parameter_index()]. */ SQLITE_API int sqlite3_bind_parameter_count(sqlite3_stmt*); /* ** CAPI3REF: Name Of A Host Parameter ** METHOD: sqlite3_stmt ** ** ^The sqlite3_bind_parameter_name(P,N) interface returns ** the name of the N-th [SQL parameter] in the [prepared statement] P. |
︙ | ︙ | |||
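A sketch of binding values with the routines declared above. The table t(a,b,c) is assumed to exist and is purely illustrative.

#include "sqlite3.h"

static int insertRow(sqlite3 *db, int a, const char *zB){
  sqlite3_stmt *pStmt = 0;
  int rc = sqlite3_prepare_v2(db, "INSERT INTO t(a,b,c) VALUES(?1,?2,?3)",
                              -1, &pStmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  sqlite3_bind_int(pStmt, 1, a);
  sqlite3_bind_text(pStmt, 2, zB, -1, SQLITE_TRANSIENT);  /* SQLite makes its own copy */
  sqlite3_bind_null(pStmt, 3);
  rc = sqlite3_step(pStmt);            /* SQLITE_DONE expected for an INSERT */
  sqlite3_finalize(pStmt);
  return rc==SQLITE_DONE ? SQLITE_OK : rc;
}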
3680 3681 3682 3683 3684 3685 3686 | ** originally specified as UTF-16 in [sqlite3_prepare16()] or ** [sqlite3_prepare16_v2()]. ** ** See also: [sqlite3_bind_blob|sqlite3_bind()], ** [sqlite3_bind_parameter_count()], and ** [sqlite3_bind_parameter_index()]. */ | | | | | | 3854 3855 3856 3857 3858 3859 3860 3861 3862 3863 3864 3865 3866 3867 3868 3869 3870 3871 3872 3873 3874 3875 3876 3877 3878 3879 3880 3881 3882 3883 3884 3885 3886 3887 3888 3889 3890 3891 3892 3893 3894 3895 3896 3897 3898 3899 3900 3901 3902 3903 3904 3905 3906 3907 | ** originally specified as UTF-16 in [sqlite3_prepare16()] or ** [sqlite3_prepare16_v2()]. ** ** See also: [sqlite3_bind_blob|sqlite3_bind()], ** [sqlite3_bind_parameter_count()], and ** [sqlite3_bind_parameter_index()]. */ SQLITE_API const char *sqlite3_bind_parameter_name(sqlite3_stmt*, int); /* ** CAPI3REF: Index Of A Parameter With A Given Name ** METHOD: sqlite3_stmt ** ** ^Return the index of an SQL parameter given its name. ^The ** index value returned is suitable for use as the second ** parameter to [sqlite3_bind_blob|sqlite3_bind()]. ^A zero ** is returned if no matching parameter is found. ^The parameter ** name must be given in UTF-8 even if the original statement ** was prepared from UTF-16 text using [sqlite3_prepare16_v2()]. ** ** See also: [sqlite3_bind_blob|sqlite3_bind()], ** [sqlite3_bind_parameter_count()], and ** [sqlite3_bind_parameter_name()]. */ SQLITE_API int sqlite3_bind_parameter_index(sqlite3_stmt*, const char *zName); /* ** CAPI3REF: Reset All Bindings On A Prepared Statement ** METHOD: sqlite3_stmt ** ** ^Contrary to the intuition of many, [sqlite3_reset()] does not reset ** the [sqlite3_bind_blob | bindings] on a [prepared statement]. ** ^Use this routine to reset all host parameters to NULL. */ SQLITE_API int sqlite3_clear_bindings(sqlite3_stmt*); /* ** CAPI3REF: Number Of Columns In A Result Set ** METHOD: sqlite3_stmt ** ** ^Return the number of columns in the result set returned by the ** [prepared statement]. ^This routine returns 0 if pStmt is an SQL ** statement that does not return data (for example an [UPDATE]). ** ** See also: [sqlite3_data_count()] */ SQLITE_API int sqlite3_column_count(sqlite3_stmt *pStmt); /* ** CAPI3REF: Column Names In A Result Set ** METHOD: sqlite3_stmt ** ** ^These routines return the name assigned to a particular column ** in the result set of a [SELECT] statement. ^The sqlite3_column_name() |
︙ | ︙ | |||
3748 3749 3750 3751 3752 3753 3754 | ** NULL pointer is returned. ** ** ^The name of a result column is the value of the "AS" clause for ** that column, if there is an AS clause. If there is no AS clause ** then the name of the column is unspecified and may change from ** one release of SQLite to the next. */ | | | | 3922 3923 3924 3925 3926 3927 3928 3929 3930 3931 3932 3933 3934 3935 3936 3937 | ** NULL pointer is returned. ** ** ^The name of a result column is the value of the "AS" clause for ** that column, if there is an AS clause. If there is no AS clause ** then the name of the column is unspecified and may change from ** one release of SQLite to the next. */ SQLITE_API const char *sqlite3_column_name(sqlite3_stmt*, int N); SQLITE_API const void *sqlite3_column_name16(sqlite3_stmt*, int N); /* ** CAPI3REF: Source Of Data In A Query Result ** METHOD: sqlite3_stmt ** ** ^These routines provide a means to determine the database, table, and ** table column that is the origin of a particular result column in |
︙ | ︙ | |||
3797 3798 3799 3800 3801 3802 3803 | ** undefined. ** ** If two or more threads call one or more ** [sqlite3_column_database_name | column metadata interfaces] ** for the same [prepared statement] and result column ** at the same time then the results are undefined. */ | | | | | | | | 3971 3972 3973 3974 3975 3976 3977 3978 3979 3980 3981 3982 3983 3984 3985 3986 3987 3988 3989 3990 | ** undefined. ** ** If two or more threads call one or more ** [sqlite3_column_database_name | column metadata interfaces] ** for the same [prepared statement] and result column ** at the same time then the results are undefined. */ SQLITE_API const char *sqlite3_column_database_name(sqlite3_stmt*,int); SQLITE_API const void *sqlite3_column_database_name16(sqlite3_stmt*,int); SQLITE_API const char *sqlite3_column_table_name(sqlite3_stmt*,int); SQLITE_API const void *sqlite3_column_table_name16(sqlite3_stmt*,int); SQLITE_API const char *sqlite3_column_origin_name(sqlite3_stmt*,int); SQLITE_API const void *sqlite3_column_origin_name16(sqlite3_stmt*,int); /* ** CAPI3REF: Declared Datatype Of A Query Result ** METHOD: sqlite3_stmt ** ** ^(The first parameter is a [prepared statement]. ** If this statement is a [SELECT] statement and the Nth column of the |
︙ | ︙ | |||
3834 3835 3836 3837 3838 3839 3840 | ** ^SQLite uses dynamic run-time typing. ^So just because a column ** is declared to contain a particular type does not mean that the ** data stored in that column is of the declared type. SQLite is ** strongly typed, but the typing is dynamic not static. ^Type ** is associated with individual values, not with the containers ** used to hold those values. */ | | | | 4008 4009 4010 4011 4012 4013 4014 4015 4016 4017 4018 4019 4020 4021 4022 4023 | ** ^SQLite uses dynamic run-time typing. ^So just because a column ** is declared to contain a particular type does not mean that the ** data stored in that column is of the declared type. SQLite is ** strongly typed, but the typing is dynamic not static. ^Type ** is associated with individual values, not with the containers ** used to hold those values. */ SQLITE_API const char *sqlite3_column_decltype(sqlite3_stmt*,int); SQLITE_API const void *sqlite3_column_decltype16(sqlite3_stmt*,int); /* ** CAPI3REF: Evaluate An SQL Statement ** METHOD: sqlite3_stmt ** ** After a [prepared statement] has been prepared using either ** [sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()] or one of the legacy |
︙ | ︙ | |||
3896 3897 3898 3899 3900 3901 3902 | ** more threads at the same moment in time. ** ** For all versions of SQLite up to and including 3.6.23.1, a call to ** [sqlite3_reset()] was required after sqlite3_step() returned anything ** other than [SQLITE_ROW] before any subsequent invocation of ** sqlite3_step(). Failure to reset the prepared statement using ** [sqlite3_reset()] would result in an [SQLITE_MISUSE] return from | > | | | | 4070 4071 4072 4073 4074 4075 4076 4077 4078 4079 4080 4081 4082 4083 4084 4085 4086 4087 4088 4089 4090 4091 4092 4093 4094 4095 4096 4097 4098 4099 4100 4101 4102 4103 4104 4105 4106 4107 4108 4109 4110 4111 4112 4113 4114 4115 4116 4117 4118 4119 4120 4121 4122 4123 4124 4125 | ** more threads at the same moment in time. ** ** For all versions of SQLite up to and including 3.6.23.1, a call to ** [sqlite3_reset()] was required after sqlite3_step() returned anything ** other than [SQLITE_ROW] before any subsequent invocation of ** sqlite3_step(). Failure to reset the prepared statement using ** [sqlite3_reset()] would result in an [SQLITE_MISUSE] return from ** sqlite3_step(). But after [version 3.6.23.1] ([dateof:3.6.23.1], ** sqlite3_step() began ** calling [sqlite3_reset()] automatically in this circumstance rather ** than returning [SQLITE_MISUSE]. This is not considered a compatibility ** break because any application that ever receives an SQLITE_MISUSE error ** is broken by definition. The [SQLITE_OMIT_AUTORESET] compile-time option ** can be used to restore the legacy behavior. ** ** <b>Goofy Interface Alert:</b> In the legacy interface, the sqlite3_step() ** API always returns a generic error code, [SQLITE_ERROR], following any ** error other than [SQLITE_BUSY] and [SQLITE_MISUSE]. You must call ** [sqlite3_reset()] or [sqlite3_finalize()] in order to find one of the ** specific [error codes] that better describes the error. ** We admit that this is a goofy design. The problem has been fixed ** with the "v2" interface. If you prepare all of your SQL statements ** using either [sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()] instead ** of the legacy [sqlite3_prepare()] and [sqlite3_prepare16()] interfaces, ** then the more specific [error codes] are returned directly ** by sqlite3_step(). The use of the "v2" interface is recommended. */ SQLITE_API int sqlite3_step(sqlite3_stmt*); /* ** CAPI3REF: Number of columns in a result set ** METHOD: sqlite3_stmt ** ** ^The sqlite3_data_count(P) interface returns the number of columns in the ** current row of the result set of [prepared statement] P. ** ^If prepared statement P does not have results ready to return ** (via calls to the [sqlite3_column_int | sqlite3_column_*()] of ** interfaces) then sqlite3_data_count(P) returns 0. ** ^The sqlite3_data_count(P) routine also returns 0 if P is a NULL pointer. ** ^The sqlite3_data_count(P) routine returns 0 if the previous call to ** [sqlite3_step](P) returned [SQLITE_DONE]. ^The sqlite3_data_count(P) ** will return non-zero if previous call to [sqlite3_step](P) returned ** [SQLITE_ROW], except in the case of the [PRAGMA incremental_vacuum] ** where it always returns zero since each step of that multi-step ** pragma returns 0 columns of data. ** ** See also: [sqlite3_column_count()] */ SQLITE_API int sqlite3_data_count(sqlite3_stmt *pStmt); /* ** CAPI3REF: Fundamental Datatypes ** KEYWORDS: SQLITE_TEXT ** ** ^(Every value in SQLite has one of five fundamental datatypes: ** |
︙ | ︙ | |||
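A sketch of the usual sqlite3_step() loop using the "v2" prepare interface recommended above; the query and the users table are hypothetical.

#include <stdio.h>
#include "sqlite3.h"

static int listUsers(sqlite3 *db){
  sqlite3_stmt *pStmt = 0;
  int rc = sqlite3_prepare_v2(db, "SELECT id, name FROM users", -1, &pStmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  while( sqlite3_step(pStmt)==SQLITE_ROW ){
    sqlite3_int64 id = sqlite3_column_int64(pStmt, 0);
    const unsigned char *zName = sqlite3_column_text(pStmt, 1);
    printf("%lld %s\n", (long long)id, zName ? (const char*)zName : "(null)");
  }
  rc = sqlite3_finalize(pStmt);   /* with the v2 interface this reports any error from the last step */
  return rc;
}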
4126 4127 4128 4129 4130 4131 4132 | ** ** ^(If a memory allocation error occurs during the evaluation of any ** of these routines, a default value is returned. The default value ** is either the integer 0, the floating point number 0.0, or a NULL ** pointer. Subsequent calls to [sqlite3_errcode()] will return ** [SQLITE_NOMEM].)^ */ | | | | | | | | | | | | 4301 4302 4303 4304 4305 4306 4307 4308 4309 4310 4311 4312 4313 4314 4315 4316 4317 4318 4319 4320 4321 4322 4323 4324 | ** ** ^(If a memory allocation error occurs during the evaluation of any ** of these routines, a default value is returned. The default value ** is either the integer 0, the floating point number 0.0, or a NULL ** pointer. Subsequent calls to [sqlite3_errcode()] will return ** [SQLITE_NOMEM].)^ */ SQLITE_API const void *sqlite3_column_blob(sqlite3_stmt*, int iCol); SQLITE_API int sqlite3_column_bytes(sqlite3_stmt*, int iCol); SQLITE_API int sqlite3_column_bytes16(sqlite3_stmt*, int iCol); SQLITE_API double sqlite3_column_double(sqlite3_stmt*, int iCol); SQLITE_API int sqlite3_column_int(sqlite3_stmt*, int iCol); SQLITE_API sqlite3_int64 sqlite3_column_int64(sqlite3_stmt*, int iCol); SQLITE_API const unsigned char *sqlite3_column_text(sqlite3_stmt*, int iCol); SQLITE_API const void *sqlite3_column_text16(sqlite3_stmt*, int iCol); SQLITE_API int sqlite3_column_type(sqlite3_stmt*, int iCol); SQLITE_API sqlite3_value *sqlite3_column_value(sqlite3_stmt*, int iCol); /* ** CAPI3REF: Destroy A Prepared Statement Object ** DESTRUCTOR: sqlite3_stmt ** ** ^The sqlite3_finalize() function is called to delete a [prepared statement]. ** ^If the most recent evaluation of the statement encountered no errors |
︙ | ︙ | |||
4163 4164 4165 4166 4167 4168 4169 | ** ** The application must finalize every [prepared statement] in order to avoid ** resource leaks. It is a grievous error for the application to try to use ** a prepared statement after it has been finalized. Any use of a prepared ** statement after it has been finalized can result in undefined and ** undesirable behavior such as segfaults and heap corruption. */ | | | 4338 4339 4340 4341 4342 4343 4344 4345 4346 4347 4348 4349 4350 4351 4352 | ** ** The application must finalize every [prepared statement] in order to avoid ** resource leaks. It is a grievous error for the application to try to use ** a prepared statement after it has been finalized. Any use of a prepared ** statement after it has been finalized can result in undefined and ** undesirable behavior such as segfaults and heap corruption. */ SQLITE_API int sqlite3_finalize(sqlite3_stmt *pStmt); /* ** CAPI3REF: Reset A Prepared Statement Object ** METHOD: sqlite3_stmt ** ** The sqlite3_reset() function is called to reset a [prepared statement] ** object back to its initial state, ready to be re-executed. |
︙ | ︙ | |||
4190 4191 4192 4193 4194 4195 4196 | ** ^If the most recent call to [sqlite3_step(S)] for the ** [prepared statement] S indicated an error, then ** [sqlite3_reset(S)] returns an appropriate [error code]. ** ** ^The [sqlite3_reset(S)] interface does not change the values ** of any [sqlite3_bind_blob|bindings] on the [prepared statement] S. */ | | | 4365 4366 4367 4368 4369 4370 4371 4372 4373 4374 4375 4376 4377 4378 4379 | ** ^If the most recent call to [sqlite3_step(S)] for the ** [prepared statement] S indicated an error, then ** [sqlite3_reset(S)] returns an appropriate [error code]. ** ** ^The [sqlite3_reset(S)] interface does not change the values ** of any [sqlite3_bind_blob|bindings] on the [prepared statement] S. */ SQLITE_API int sqlite3_reset(sqlite3_stmt *pStmt); /* ** CAPI3REF: Create Or Redefine SQL Functions ** KEYWORDS: {function creation routines} ** KEYWORDS: {application-defined SQL function} ** KEYWORDS: {application-defined SQL functions} ** METHOD: sqlite3 |
︙ | ︙ | |||
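A sketch of reusing one prepared statement with sqlite3_reset(), per the documentation above. Since reset does not clear bindings, the example clears them explicitly. The log table and message array are invented.

#include "sqlite3.h"

static int insertMessages(sqlite3 *db, const char **azMsg, int nMsg){
  sqlite3_stmt *pStmt = 0;
  int i, rc = sqlite3_prepare_v2(db, "INSERT INTO log(msg) VALUES(?1)", -1, &pStmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  for(i=0; i<nMsg; i++){
    sqlite3_bind_text(pStmt, 1, azMsg[i], -1, SQLITE_STATIC);
    sqlite3_step(pStmt);
    sqlite3_reset(pStmt);           /* back to the initial state, ready to run again */
    sqlite3_clear_bindings(pStmt);  /* bindings survive reset, so clear them explicitly */
  }
  return sqlite3_finalize(pStmt);
}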
4290 4291 4292 4293 4294 4295 4296 | ** ^Built-in functions may be overloaded by new application-defined functions. ** ** ^An application-defined function is permitted to call other ** SQLite interfaces. However, such calls must not ** close the database connection nor finalize or reset the prepared ** statement in which the function is running. */ | | | | | 4465 4466 4467 4468 4469 4470 4471 4472 4473 4474 4475 4476 4477 4478 4479 4480 4481 4482 4483 4484 4485 4486 4487 4488 4489 4490 4491 4492 4493 4494 4495 4496 4497 4498 4499 | ** ^Built-in functions may be overloaded by new application-defined functions. ** ** ^An application-defined function is permitted to call other ** SQLite interfaces. However, such calls must not ** close the database connection nor finalize or reset the prepared ** statement in which the function is running. */ SQLITE_API int sqlite3_create_function( sqlite3 *db, const char *zFunctionName, int nArg, int eTextRep, void *pApp, void (*xFunc)(sqlite3_context*,int,sqlite3_value**), void (*xStep)(sqlite3_context*,int,sqlite3_value**), void (*xFinal)(sqlite3_context*) ); SQLITE_API int sqlite3_create_function16( sqlite3 *db, const void *zFunctionName, int nArg, int eTextRep, void *pApp, void (*xFunc)(sqlite3_context*,int,sqlite3_value**), void (*xStep)(sqlite3_context*,int,sqlite3_value**), void (*xFinal)(sqlite3_context*) ); SQLITE_API int sqlite3_create_function_v2( sqlite3 *db, const char *zFunctionName, int nArg, int eTextRep, void *pApp, void (*xFunc)(sqlite3_context*,int,sqlite3_value**), void (*xStep)(sqlite3_context*,int,sqlite3_value**), |
︙ | ︙ | |||
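As a concrete sketch of the function-creation routines above, a deterministic one-argument scalar function. The name "half" and its behavior are invented for this example.

#include "sqlite3.h"

/* half(X) returns X divided by two as a floating point value. */
static void halfFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
  (void)argc;
  sqlite3_result_double(ctx, 0.5*sqlite3_value_double(argv[0]));
}

static int registerHalf(sqlite3 *db){
  return sqlite3_create_function_v2(db, "half", 1,
                                    SQLITE_UTF8|SQLITE_DETERMINISTIC,
                                    0,          /* no application data */
                                    halfFunc,   /* xFunc: scalar implementation */
                                    0, 0,       /* xStep/xFinal unused for scalars */
                                    0);         /* no destructor */
}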
4356 4357 4358 4359 4360 4361 4362 | ** These functions are [deprecated]. In order to maintain ** backwards compatibility with older code, these functions continue ** to be supported. However, new applications should avoid ** the use of these functions. To encourage programmers to avoid ** these functions, we will not explain what they do. */ #ifndef SQLITE_OMIT_DEPRECATED | | | | | | | | 4531 4532 4533 4534 4535 4536 4537 4538 4539 4540 4541 4542 4543 4544 4545 4546 4547 4548 4549 4550 | ** These functions are [deprecated]. In order to maintain ** backwards compatibility with older code, these functions continue ** to be supported. However, new applications should avoid ** the use of these functions. To encourage programmers to avoid ** these functions, we will not explain what they do. */ #ifndef SQLITE_OMIT_DEPRECATED SQLITE_API SQLITE_DEPRECATED int sqlite3_aggregate_count(sqlite3_context*); SQLITE_API SQLITE_DEPRECATED int sqlite3_expired(sqlite3_stmt*); SQLITE_API SQLITE_DEPRECATED int sqlite3_transfer_bindings(sqlite3_stmt*, sqlite3_stmt*); SQLITE_API SQLITE_DEPRECATED int sqlite3_global_recover(void); SQLITE_API SQLITE_DEPRECATED void sqlite3_thread_cleanup(void); SQLITE_API SQLITE_DEPRECATED int sqlite3_memory_alarm(void(*)(void*,sqlite3_int64,int), void*,sqlite3_int64); #endif /* ** CAPI3REF: Obtaining SQL Values ** METHOD: sqlite3_value ** |
︙ | ︙ | |||
4411 4412 4413 4414 4415 4416 4417 | ** [sqlite3_value_text16()] can be invalidated by a subsequent call to ** [sqlite3_value_bytes()], [sqlite3_value_bytes16()], [sqlite3_value_text()], ** or [sqlite3_value_text16()]. ** ** These routines must be called from the same thread as ** the SQL function that supplied the [sqlite3_value*] parameters. */ | | | | | | | | | | | | | | | | | 4586 4587 4588 4589 4590 4591 4592 4593 4594 4595 4596 4597 4598 4599 4600 4601 4602 4603 4604 4605 4606 4607 4608 4609 4610 4611 4612 4613 4614 4615 4616 4617 4618 4619 4620 4621 4622 4623 4624 4625 4626 4627 4628 4629 4630 4631 4632 4633 4634 4635 4636 4637 4638 4639 4640 4641 4642 4643 4644 | ** [sqlite3_value_text16()] can be invalidated by a subsequent call to ** [sqlite3_value_bytes()], [sqlite3_value_bytes16()], [sqlite3_value_text()], ** or [sqlite3_value_text16()]. ** ** These routines must be called from the same thread as ** the SQL function that supplied the [sqlite3_value*] parameters. */ SQLITE_API const void *sqlite3_value_blob(sqlite3_value*); SQLITE_API int sqlite3_value_bytes(sqlite3_value*); SQLITE_API int sqlite3_value_bytes16(sqlite3_value*); SQLITE_API double sqlite3_value_double(sqlite3_value*); SQLITE_API int sqlite3_value_int(sqlite3_value*); SQLITE_API sqlite3_int64 sqlite3_value_int64(sqlite3_value*); SQLITE_API const unsigned char *sqlite3_value_text(sqlite3_value*); SQLITE_API const void *sqlite3_value_text16(sqlite3_value*); SQLITE_API const void *sqlite3_value_text16le(sqlite3_value*); SQLITE_API const void *sqlite3_value_text16be(sqlite3_value*); SQLITE_API int sqlite3_value_type(sqlite3_value*); SQLITE_API int sqlite3_value_numeric_type(sqlite3_value*); /* ** CAPI3REF: Finding The Subtype Of SQL Values ** METHOD: sqlite3_value ** ** The sqlite3_value_subtype(V) function returns the subtype for ** an [application-defined SQL function] argument V. The subtype ** information can be used to pass a limited amount of context from ** one SQL function to another. Use the [sqlite3_result_subtype()] ** routine to set the subtype for the return value of an SQL function. ** ** SQLite makes no use of subtype itself. It merely passes the subtype ** from the result of one [application-defined SQL function] into the ** input of another. */ SQLITE_API unsigned int sqlite3_value_subtype(sqlite3_value*); /* ** CAPI3REF: Copy And Free SQL Values ** METHOD: sqlite3_value ** ** ^The sqlite3_value_dup(V) interface makes a copy of the [sqlite3_value] ** object D and returns a pointer to that copy. ^The [sqlite3_value] returned ** is a [protected sqlite3_value] object even if the input is not. ** ^The sqlite3_value_dup(V) interface returns NULL if V is NULL or if a ** memory allocation fails. ** ** ^The sqlite3_value_free(V) interface frees an [sqlite3_value] object ** previously obtained from [sqlite3_value_dup()]. ^If V is a NULL pointer ** then sqlite3_value_free(V) is a harmless no-op. */ SQLITE_API sqlite3_value *sqlite3_value_dup(const sqlite3_value*); SQLITE_API void sqlite3_value_free(sqlite3_value*); /* ** CAPI3REF: Obtain Aggregate Function Context ** METHOD: sqlite3_context ** ** Implementations of aggregate SQL functions use this ** routine to allocate memory for storing their state. |
︙ | ︙ | |||
4500 4501 4502 4503 4504 4505 4506 | ** [sqlite3_context | SQL function context] that is the first parameter ** to the xStep or xFinal callback routine that implements the aggregate ** function. ** ** This routine must be called from the same thread in which ** the aggregate SQL function is running. */ | | | | | 4675 4676 4677 4678 4679 4680 4681 4682 4683 4684 4685 4686 4687 4688 4689 4690 4691 4692 4693 4694 4695 4696 4697 4698 4699 4700 4701 4702 4703 4704 4705 4706 4707 4708 4709 4710 4711 4712 4713 4714 4715 4716 | ** [sqlite3_context | SQL function context] that is the first parameter ** to the xStep or xFinal callback routine that implements the aggregate ** function. ** ** This routine must be called from the same thread in which ** the aggregate SQL function is running. */ SQLITE_API void *sqlite3_aggregate_context(sqlite3_context*, int nBytes); /* ** CAPI3REF: User Data For Functions ** METHOD: sqlite3_context ** ** ^The sqlite3_user_data() interface returns a copy of ** the pointer that was the pUserData parameter (the 5th parameter) ** of the [sqlite3_create_function()] ** and [sqlite3_create_function16()] routines that originally ** registered the application defined function. ** ** This routine must be called from the same thread in which ** the application-defined function is running. */ SQLITE_API void *sqlite3_user_data(sqlite3_context*); /* ** CAPI3REF: Database Connection For Functions ** METHOD: sqlite3_context ** ** ^The sqlite3_context_db_handle() interface returns a copy of ** the pointer to the [database connection] (the 1st parameter) ** of the [sqlite3_create_function()] ** and [sqlite3_create_function16()] routines that originally ** registered the application defined function. */ SQLITE_API sqlite3 *sqlite3_context_db_handle(sqlite3_context*); /* ** CAPI3REF: Function Auxiliary Data ** METHOD: sqlite3_context ** ** These functions may be used by (non-aggregate) SQL functions to ** associate metadata with argument values. If the same value is passed to |
︙ | ︙ | |||
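A sketch of an aggregate that uses sqlite3_aggregate_context() as described above. The aggregate simply counts the rows it sees and is registered under an invented name.

#include "sqlite3.h"

typedef struct RowCount { sqlite3_int64 n; } RowCount;

static void rowcountStep(sqlite3_context *ctx, int argc, sqlite3_value **argv){
  /* the first call allocates zeroed memory; later calls return the same buffer */
  RowCount *p = (RowCount*)sqlite3_aggregate_context(ctx, sizeof(*p));
  (void)argc; (void)argv;
  if( p ) p->n++;
}

static void rowcountFinal(sqlite3_context *ctx){
  RowCount *p = (RowCount*)sqlite3_aggregate_context(ctx, 0);  /* 0 means: do not allocate */
  sqlite3_result_int64(ctx, p ? p->n : 0);
}

static int registerRowcount(sqlite3 *db){
  return sqlite3_create_function(db, "rowcount", 1, SQLITE_UTF8, 0,
                                 0, rowcountStep, rowcountFinal);
}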
4559 4560 4561 4562 4563 4564 4565 | ** calls to sqlite3_get_auxdata(C,N) return P from the most recent
** sqlite3_set_auxdata(C,N,P,X) call if the metadata is still valid or
** NULL if the metadata has been discarded.
** ^After each call to sqlite3_set_auxdata(C,N,P,X) where X is not NULL,
** SQLite will invoke the destructor function X with parameter P exactly
** once, when the metadata is discarded.
** SQLite is free to discard the metadata at any time, including: <ul> | | | > | | | | | 4734 4735 4736 4737 4738 4739 4740 4741 4742 4743 4744 4745 4746 4747 4748 4749 4750 4751 4752 4753 4754 4755 4756 4757 4758 4759 4760 4761 4762 4763 4764 4765 4766 4767 4768 4769 4770 4771 | ** calls to sqlite3_get_auxdata(C,N) return P from the most recent
** sqlite3_set_auxdata(C,N,P,X) call if the metadata is still valid or
** NULL if the metadata has been discarded.
** ^After each call to sqlite3_set_auxdata(C,N,P,X) where X is not NULL,
** SQLite will invoke the destructor function X with parameter P exactly
** once, when the metadata is discarded.
** SQLite is free to discard the metadata at any time, including: <ul>
** <li> ^(when the corresponding function parameter changes)^, or
** <li> ^(when [sqlite3_reset()] or [sqlite3_finalize()] is called for the
**      SQL statement)^, or
** <li> ^(when sqlite3_set_auxdata() is invoked again on the same
**      parameter)^, or
** <li> ^(during the original sqlite3_set_auxdata() call when a memory
**      allocation error occurs.)^ </ul>
**
** Note the last bullet in particular.  The destructor X in
** sqlite3_set_auxdata(C,N,P,X) might be called immediately, before the
** sqlite3_set_auxdata() interface even returns.  Hence sqlite3_set_auxdata()
** should be called near the end of the function implementation and the
** function implementation should not make any use of P after
** sqlite3_set_auxdata() has been called.
**
** ^(In practice, metadata is preserved between function calls for
** function parameters that are compile-time constants, including literal
** values and [parameters] and expressions composed from the same.)^
**
** These routines must be called from the same thread in which
** the SQL function is running.
*/ SQLITE_API void *sqlite3_get_auxdata(sqlite3_context*, int N); SQLITE_API void sqlite3_set_auxdata(sqlite3_context*, int N, void*, void (*)(void*)); /* ** CAPI3REF: Constants Defining Special Destructor Behavior ** ** These are special values for the destructor that is passed in as the ** final argument to routines like [sqlite3_result_blob()].  ^If the destructor
︙ | ︙ | |||
4717 4718 4719 4720 4721 4722 4723 | ** [unprotected sqlite3_value] object is required, so either ** kind of [sqlite3_value] object can be used with this interface. ** ** If these routines are called from within the different thread ** than the one containing the application-defined function that received ** the [sqlite3_context] pointer, the results are undefined. */ | | | | | | | | | | | | | | | | | | | | | | 4893 4894 4895 4896 4897 4898 4899 4900 4901 4902 4903 4904 4905 4906 4907 4908 4909 4910 4911 4912 4913 4914 4915 4916 4917 4918 4919 4920 4921 4922 4923 4924 4925 4926 4927 4928 4929 4930 4931 4932 4933 4934 4935 4936 4937 4938 4939 4940 4941 4942 | ** [unprotected sqlite3_value] object is required, so either ** kind of [sqlite3_value] object can be used with this interface. ** ** If these routines are called from within the different thread ** than the one containing the application-defined function that received ** the [sqlite3_context] pointer, the results are undefined. */ SQLITE_API void sqlite3_result_blob(sqlite3_context*, const void*, int, void(*)(void*)); SQLITE_API void sqlite3_result_blob64(sqlite3_context*,const void*, sqlite3_uint64,void(*)(void*)); SQLITE_API void sqlite3_result_double(sqlite3_context*, double); SQLITE_API void sqlite3_result_error(sqlite3_context*, const char*, int); SQLITE_API void sqlite3_result_error16(sqlite3_context*, const void*, int); SQLITE_API void sqlite3_result_error_toobig(sqlite3_context*); SQLITE_API void sqlite3_result_error_nomem(sqlite3_context*); SQLITE_API void sqlite3_result_error_code(sqlite3_context*, int); SQLITE_API void sqlite3_result_int(sqlite3_context*, int); SQLITE_API void sqlite3_result_int64(sqlite3_context*, sqlite3_int64); SQLITE_API void sqlite3_result_null(sqlite3_context*); SQLITE_API void sqlite3_result_text(sqlite3_context*, const char*, int, void(*)(void*)); SQLITE_API void sqlite3_result_text64(sqlite3_context*, const char*,sqlite3_uint64, void(*)(void*), unsigned char encoding); SQLITE_API void sqlite3_result_text16(sqlite3_context*, const void*, int, void(*)(void*)); SQLITE_API void sqlite3_result_text16le(sqlite3_context*, const void*, int,void(*)(void*)); SQLITE_API void sqlite3_result_text16be(sqlite3_context*, const void*, int,void(*)(void*)); SQLITE_API void sqlite3_result_value(sqlite3_context*, sqlite3_value*); SQLITE_API void sqlite3_result_zeroblob(sqlite3_context*, int n); SQLITE_API int sqlite3_result_zeroblob64(sqlite3_context*, sqlite3_uint64 n); /* ** CAPI3REF: Setting The Subtype Of An SQL Function ** METHOD: sqlite3_context ** ** The sqlite3_result_subtype(C,T) function causes the subtype of ** the result from the [application-defined SQL function] with ** [sqlite3_context] C to be the value T. Only the lower 8 bits ** of the subtype T are preserved in current versions of SQLite; ** higher order bits are discarded. ** The number of subtype bytes preserved by SQLite might increase ** in future releases of SQLite. */ SQLITE_API void sqlite3_result_subtype(sqlite3_context*,unsigned int); /* ** CAPI3REF: Define New Collating Sequences ** METHOD: sqlite3 ** ** ^These functions add, remove, or modify a [collation] associated ** with the [database connection] specified as the first argument. |
︙ | ︙ | |||
4834 4835 4836 4837 4838 4839 4840 | ** themselves rather than expecting SQLite to deal with it for them. ** This is different from every other SQLite interface. The inconsistency ** is unfortunate but cannot be changed without breaking backwards ** compatibility. ** ** See also: [sqlite3_collation_needed()] and [sqlite3_collation_needed16()]. */ | | | | | 5010 5011 5012 5013 5014 5015 5016 5017 5018 5019 5020 5021 5022 5023 5024 5025 5026 5027 5028 5029 5030 5031 5032 5033 5034 5035 5036 5037 5038 5039 | ** themselves rather than expecting SQLite to deal with it for them. ** This is different from every other SQLite interface. The inconsistency ** is unfortunate but cannot be changed without breaking backwards ** compatibility. ** ** See also: [sqlite3_collation_needed()] and [sqlite3_collation_needed16()]. */ SQLITE_API int sqlite3_create_collation( sqlite3*, const char *zName, int eTextRep, void *pArg, int(*xCompare)(void*,int,const void*,int,const void*) ); SQLITE_API int sqlite3_create_collation_v2( sqlite3*, const char *zName, int eTextRep, void *pArg, int(*xCompare)(void*,int,const void*,int,const void*), void(*xDestroy)(void*) ); SQLITE_API int sqlite3_create_collation16( sqlite3*, const void *zName, int eTextRep, void *pArg, int(*xCompare)(void*,int,const void*,int,const void*) ); |
︙ | ︙ | |||
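A sketch of registering a collation with sqlite3_create_collation() as declared above. The "revbytes" name and the reverse byte-order comparison are invented purely to illustrate the callback contract.

#include <string.h>
#include "sqlite3.h"

/* Compare as raw bytes, but in descending order. */
static int revBytesCollation(void *pUnused, int n1, const void *p1,
                             int n2, const void *p2){
  int n = n1<n2 ? n1 : n2;
  int c = memcmp(p2, p1, n);       /* arguments swapped to invert the ordering */
  (void)pUnused;
  return c!=0 ? c : (n2 - n1);
}

static int registerCollation(sqlite3 *db){
  return sqlite3_create_collation(db, "revbytes", SQLITE_UTF8, 0, revBytesCollation);
}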
4884 4885 4886 4887 4888 4889 4890 | ** sequence function required. The fourth parameter is the name of the ** required collation sequence.)^ ** ** The callback function should register the desired collation using ** [sqlite3_create_collation()], [sqlite3_create_collation16()], or ** [sqlite3_create_collation_v2()]. */ | | | | | | | | | | 5060 5061 5062 5063 5064 5065 5066 5067 5068 5069 5070 5071 5072 5073 5074 5075 5076 5077 5078 5079 5080 5081 5082 5083 5084 5085 5086 5087 5088 5089 5090 5091 5092 5093 5094 5095 5096 5097 5098 5099 5100 5101 5102 5103 5104 5105 5106 5107 5108 5109 5110 5111 5112 5113 5114 5115 5116 5117 5118 5119 5120 5121 5122 5123 5124 5125 5126 5127 5128 5129 5130 5131 5132 5133 5134 5135 | ** sequence function required. The fourth parameter is the name of the ** required collation sequence.)^ ** ** The callback function should register the desired collation using ** [sqlite3_create_collation()], [sqlite3_create_collation16()], or ** [sqlite3_create_collation_v2()]. */ SQLITE_API int sqlite3_collation_needed( sqlite3*, void*, void(*)(void*,sqlite3*,int eTextRep,const char*) ); SQLITE_API int sqlite3_collation_needed16( sqlite3*, void*, void(*)(void*,sqlite3*,int eTextRep,const void*) ); #ifdef SQLITE_HAS_CODEC /* ** Specify the key for an encrypted database. This routine should be ** called right after sqlite3_open(). ** ** The code to implement this API is not available in the public release ** of SQLite. */ SQLITE_API int sqlite3_key( sqlite3 *db, /* Database to be rekeyed */ const void *pKey, int nKey /* The key */ ); SQLITE_API int sqlite3_key_v2( sqlite3 *db, /* Database to be rekeyed */ const char *zDbName, /* Name of the database */ const void *pKey, int nKey /* The key */ ); /* ** Change the key on an open database. If the current database is not ** encrypted, this routine will encrypt it. If pNew==0 or nNew==0, the ** database is decrypted. ** ** The code to implement this API is not available in the public release ** of SQLite. */ SQLITE_API int sqlite3_rekey( sqlite3 *db, /* Database to be rekeyed */ const void *pKey, int nKey /* The new key */ ); SQLITE_API int sqlite3_rekey_v2( sqlite3 *db, /* Database to be rekeyed */ const char *zDbName, /* Name of the database */ const void *pKey, int nKey /* The new key */ ); /* ** Specify the activation key for a SEE database. Unless ** activated, none of the SEE routines will work. */ SQLITE_API void sqlite3_activate_see( const char *zPassPhrase /* Activation phrase */ ); #endif #ifdef SQLITE_ENABLE_CEROD /* ** Specify the activation key for a CEROD database. Unless ** activated, none of the CEROD routines will work. */ SQLITE_API void sqlite3_activate_cerod( const char *zPassPhrase /* Activation phrase */ ); #endif /* ** CAPI3REF: Suspend Execution For A Short Time ** |
︙ | ︙ | |||
4967 4968 4969 4970 4971 4972 4973 | ** ** ^SQLite implements this interface by calling the xSleep() ** method of the default [sqlite3_vfs] object. If the xSleep() method ** of the default VFS is not implemented correctly, or not implemented at ** all, then the behavior of sqlite3_sleep() may deviate from the description ** in the previous paragraphs. */ | | | 5143 5144 5145 5146 5147 5148 5149 5150 5151 5152 5153 5154 5155 5156 5157 | ** ** ^SQLite implements this interface by calling the xSleep() ** method of the default [sqlite3_vfs] object. If the xSleep() method ** of the default VFS is not implemented correctly, or not implemented at ** all, then the behavior of sqlite3_sleep() may deviate from the description ** in the previous paragraphs. */ SQLITE_API int sqlite3_sleep(int); /* ** CAPI3REF: Name Of The Folder Holding Temporary Files ** ** ^(If this global variable is made to point to a string which is ** the name of a folder (a.k.a. directory), then all temporary files ** created by SQLite when using a built-in [sqlite3_vfs | VFS] |
︙ | ︙ | |||
5086 5087 5088 5089 5090 5091 5092 | ** find out whether SQLite automatically rolled back the transaction after ** an error is to use this function. ** ** If another thread changes the autocommit status of the database ** connection while this routine is running, then the return value ** is undefined. */ | | | | | | | 5262 5263 5264 5265 5266 5267 5268 5269 5270 5271 5272 5273 5274 5275 5276 5277 5278 5279 5280 5281 5282 5283 5284 5285 5286 5287 5288 5289 5290 5291 5292 5293 5294 5295 5296 5297 5298 5299 5300 5301 5302 5303 5304 5305 5306 5307 5308 5309 5310 5311 5312 5313 5314 5315 5316 5317 5318 5319 5320 5321 5322 5323 5324 5325 5326 5327 5328 5329 5330 5331 5332 | ** find out whether SQLite automatically rolled back the transaction after ** an error is to use this function. ** ** If another thread changes the autocommit status of the database ** connection while this routine is running, then the return value ** is undefined. */ SQLITE_API int sqlite3_get_autocommit(sqlite3*); /* ** CAPI3REF: Find The Database Handle Of A Prepared Statement ** METHOD: sqlite3_stmt ** ** ^The sqlite3_db_handle interface returns the [database connection] handle ** to which a [prepared statement] belongs. ^The [database connection] ** returned by sqlite3_db_handle is the same [database connection] ** that was the first argument ** to the [sqlite3_prepare_v2()] call (or its variants) that was used to ** create the statement in the first place. */ SQLITE_API sqlite3 *sqlite3_db_handle(sqlite3_stmt*); /* ** CAPI3REF: Return The Filename For A Database Connection ** METHOD: sqlite3 ** ** ^The sqlite3_db_filename(D,N) interface returns a pointer to a filename ** associated with database N of connection D. ^The main database file ** has the name "main". If there is no attached database N on the database ** connection D, or if database N is a temporary or in-memory database, then ** a NULL pointer is returned. ** ** ^The filename returned by this function is the output of the ** xFullPathname method of the [VFS]. ^In other words, the filename ** will be an absolute pathname, even if the filename used ** to open the database originally was a URI or relative pathname. */ SQLITE_API const char *sqlite3_db_filename(sqlite3 *db, const char *zDbName); /* ** CAPI3REF: Determine if a database is read-only ** METHOD: sqlite3 ** ** ^The sqlite3_db_readonly(D,N) interface returns 1 if the database N ** of connection D is read-only, 0 if it is read/write, or -1 if N is not ** the name of a database on connection D. */ SQLITE_API int sqlite3_db_readonly(sqlite3 *db, const char *zDbName); /* ** CAPI3REF: Find the next prepared statement ** METHOD: sqlite3 ** ** ^This interface returns a pointer to the next [prepared statement] after ** pStmt associated with the [database connection] pDb. ^If pStmt is NULL ** then this interface returns a pointer to the first prepared statement ** associated with the database connection pDb. ^If no prepared statement ** satisfies the conditions of this routine, it returns NULL. ** ** The [database connection] pointer D in a call to ** [sqlite3_next_stmt(D,S)] must refer to an open database ** connection and in particular must not be a NULL pointer. */ SQLITE_API sqlite3_stmt *sqlite3_next_stmt(sqlite3 *pDb, sqlite3_stmt *pStmt); /* ** CAPI3REF: Commit And Rollback Notification Callbacks ** METHOD: sqlite3 ** ** ^The sqlite3_commit_hook() interface registers a callback ** function to be invoked whenever a transaction is [COMMIT | committed]. |
︙ | ︙ | |||
5191 5192 5193 5194 5195 5196 5197 | ** rolled back if an explicit "ROLLBACK" statement is executed, or ** an error or constraint causes an implicit rollback to occur. ** ^The rollback callback is not invoked if a transaction is ** automatically rolled back because the database connection is closed. ** ** See also the [sqlite3_update_hook()] interface. */ | | | | 5367 5368 5369 5370 5371 5372 5373 5374 5375 5376 5377 5378 5379 5380 5381 5382 | ** rolled back if an explicit "ROLLBACK" statement is executed, or ** an error or constraint causes an implicit rollback to occur. ** ^The rollback callback is not invoked if a transaction is ** automatically rolled back because the database connection is closed. ** ** See also the [sqlite3_update_hook()] interface. */ SQLITE_API void *sqlite3_commit_hook(sqlite3*, int(*)(void*), void*); SQLITE_API void *sqlite3_rollback_hook(sqlite3*, void(*)(void *), void*); /* ** CAPI3REF: Data Change Notification Callbacks ** METHOD: sqlite3 ** ** ^The sqlite3_update_hook() interface registers a callback function ** with the [database connection] identified by the first argument |
︙ | ︙ | |||
5243 5244 5245 5246 5247 5248 5249 | ** returns the P argument from the previous call
** on the same [database connection] D, or NULL for
** the first call on D.
**
** See also the [sqlite3_commit_hook()], [sqlite3_rollback_hook()],
** and [sqlite3_preupdate_hook()] interfaces.
*/ | > | | 5419 5420 5421 5422 5423 5424 5425 5426 5427 5428 5429 5430 5431 5432 5433 5434 5435 5436 5437 5438 5439 5440 5441 5442 5443 5444 5445 5446 5447 5448 5449 | ** returns the P argument from the previous call
** on the same [database connection] D, or NULL for
** the first call on D.
**
** See also the [sqlite3_commit_hook()], [sqlite3_rollback_hook()],
** and [sqlite3_preupdate_hook()] interfaces.
*/ SQLITE_API void *sqlite3_update_hook( sqlite3*, void(*)(void *,int ,char const *,char const *,sqlite3_int64), void* ); /* ** CAPI3REF: Enable Or Disable Shared Pager Cache ** ** ^(This routine enables or disables the sharing of the database cache ** and schema data structures between [database connection | connections] ** to the same database.  Sharing is enabled if the argument is true ** and disabled if the argument is false.)^ ** ** ^Cache sharing is enabled and disabled for an entire process. ** This is a change as of SQLite [version 3.5.0] ([dateof:3.5.0]). ** In prior versions of SQLite, ** sharing was enabled or disabled for each thread separately. ** ** ^(The cache sharing mode set by this interface affects all subsequent ** calls to [sqlite3_open()], [sqlite3_open_v2()], and [sqlite3_open16()]. ** Existing database connections continue to use the sharing mode ** that was in effect at the time they were opened.)^ **
︙ | ︙ | |||
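A sketch combining the commit, rollback, and update hooks documented above. The logging is invented, and, per the restrictions in the text, the callbacks stay away from the database connection itself.

#include <stdio.h>
#include "sqlite3.h"

static void updateHook(void *pCtx, int op, const char *zDb,
                       const char *zTbl, sqlite3_int64 rowid){
  (void)pCtx;
  printf("%s %s.%s rowid=%lld\n",
         op==SQLITE_INSERT ? "INSERT" : op==SQLITE_DELETE ? "DELETE" : "UPDATE",
         zDb, zTbl, (long long)rowid);
}

static int commitHook(void *pCtx){
  (void)pCtx;
  return 0;               /* returning non-zero would turn the COMMIT into a ROLLBACK */
}

static void rollbackHook(void *pCtx){
  (void)pCtx;
  printf("transaction rolled back\n");
}

static void installHooks(sqlite3 *db){
  sqlite3_update_hook(db, updateHook, 0);
  sqlite3_commit_hook(db, commitHook, 0);
  sqlite3_rollback_hook(db, rollbackHook, 0);
}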
5283 5284 5285 5286 5287 5288 5289 | ** [sqlite3_open_v2()] with [SQLITE_OPEN_SHAREDCACHE]. ** ** This interface is threadsafe on processors where writing a ** 32-bit integer is atomic. ** ** See Also: [SQLite Shared-Cache Mode] */ | | | | | 5460 5461 5462 5463 5464 5465 5466 5467 5468 5469 5470 5471 5472 5473 5474 5475 5476 5477 5478 5479 5480 5481 5482 5483 5484 5485 5486 5487 5488 5489 5490 5491 5492 5493 5494 5495 5496 5497 5498 5499 5500 5501 5502 5503 5504 | ** [sqlite3_open_v2()] with [SQLITE_OPEN_SHAREDCACHE]. ** ** This interface is threadsafe on processors where writing a ** 32-bit integer is atomic. ** ** See Also: [SQLite Shared-Cache Mode] */ SQLITE_API int sqlite3_enable_shared_cache(int); /* ** CAPI3REF: Attempt To Free Heap Memory ** ** ^The sqlite3_release_memory() interface attempts to free N bytes ** of heap memory by deallocating non-essential memory allocations ** held by the database library. Memory used to cache database ** pages to improve performance is an example of non-essential memory. ** ^sqlite3_release_memory() returns the number of bytes actually freed, ** which might be more or less than the amount requested. ** ^The sqlite3_release_memory() routine is a no-op returning zero ** if SQLite is not compiled with [SQLITE_ENABLE_MEMORY_MANAGEMENT]. ** ** See also: [sqlite3_db_release_memory()] */ SQLITE_API int sqlite3_release_memory(int); /* ** CAPI3REF: Free Memory Used By A Database Connection ** METHOD: sqlite3 ** ** ^The sqlite3_db_release_memory(D) interface attempts to free as much heap ** memory as possible from database connection D. Unlike the ** [sqlite3_release_memory()] interface, this interface is in effect even ** when the [SQLITE_ENABLE_MEMORY_MANAGEMENT] compile-time option is ** omitted. ** ** See also: [sqlite3_release_memory()] */ SQLITE_API int sqlite3_db_release_memory(sqlite3*); /* ** CAPI3REF: Impose A Limit On Heap Size ** ** ^The sqlite3_soft_heap_limit64() interface sets and/or queries the ** soft limit on the amount of heap memory that may be allocated by SQLite. ** ^SQLite strives to keep heap memory utilization below the soft heap |
︙ | ︙ | |||
5352 5353 5354 5355 5356 5357 5358 | ** <li> An alternative page cache implementation is specified using ** [sqlite3_config]([SQLITE_CONFIG_PCACHE2],...). ** <li> The page cache allocates from its own memory pool supplied ** by [sqlite3_config]([SQLITE_CONFIG_PAGECACHE],...) rather than ** from the heap. ** </ul>)^ ** | > | | | | | 5529 5530 5531 5532 5533 5534 5535 5536 5537 5538 5539 5540 5541 5542 5543 5544 5545 5546 5547 5548 5549 5550 5551 5552 5553 5554 5555 5556 5557 5558 5559 5560 5561 5562 5563 5564 5565 5566 5567 5568 5569 5570 5571 5572 5573 5574 5575 5576 5577 5578 5579 5580 5581 5582 5583 | ** <li> An alternative page cache implementation is specified using ** [sqlite3_config]([SQLITE_CONFIG_PCACHE2],...). ** <li> The page cache allocates from its own memory pool supplied ** by [sqlite3_config]([SQLITE_CONFIG_PAGECACHE],...) rather than ** from the heap. ** </ul>)^ ** ** Beginning with SQLite [version 3.7.3] ([dateof:3.7.3]), ** the soft heap limit is enforced ** regardless of whether or not the [SQLITE_ENABLE_MEMORY_MANAGEMENT] ** compile-time option is invoked. With [SQLITE_ENABLE_MEMORY_MANAGEMENT], ** the soft heap limit is enforced on every memory allocation. Without ** [SQLITE_ENABLE_MEMORY_MANAGEMENT], the soft heap limit is only enforced ** when memory is allocated by the page cache. Testing suggests that because ** the page cache is the predominate memory user in SQLite, most ** applications will achieve adequate soft heap limit enforcement without ** the use of [SQLITE_ENABLE_MEMORY_MANAGEMENT]. ** ** The circumstances under which SQLite will enforce the soft heap limit may ** changes in future releases of SQLite. */ SQLITE_API sqlite3_int64 sqlite3_soft_heap_limit64(sqlite3_int64 N); /* ** CAPI3REF: Deprecated Soft Heap Limit Interface ** DEPRECATED ** ** This is a deprecated version of the [sqlite3_soft_heap_limit64()] ** interface. This routine is provided for historical compatibility ** only. All new applications should use the ** [sqlite3_soft_heap_limit64()] interface rather than this one. */ SQLITE_API SQLITE_DEPRECATED void sqlite3_soft_heap_limit(int N); /* ** CAPI3REF: Extract Metadata About A Column Of A Table ** METHOD: sqlite3 ** ** ^(The sqlite3_table_column_metadata(X,D,T,C,....) routine returns ** information about column C of table T in database D ** on [database connection] X.)^ ^The sqlite3_table_column_metadata() ** interface returns SQLITE_OK and fills in the non-NULL pointers in ** the final five arguments with appropriate values if the specified ** column exists. ^The sqlite3_table_column_metadata() interface returns ** SQLITE_ERROR and if the specified column does not exist. ** ^If the column-name parameter to sqlite3_table_column_metadata() is a ** NULL pointer, then this routine simply checks for the existence of the ** table and returns SQLITE_OK if the table exists and SQLITE_ERROR if it ** does not. ** ** ^The column is identified by the second, third and fourth parameters to ** this function. ^(The second parameter is either the name of the database ** (i.e. "main", "temp", or an attached database) containing the specified ** table or NULL.)^ ^If it is NULL, then all attached databases are searched |
︙ | ︙ | |||
5446 5447 5448 5449 5450 5451 5452 | ** auto increment: 0 ** </pre>)^ ** ** ^This function causes all database schemas to be read from disk and ** parsed, if that has not already been done, and returns an error if ** any errors are encountered while loading the schema. */ | | | 5624 5625 5626 5627 5628 5629 5630 5631 5632 5633 5634 5635 5636 5637 5638 | ** auto increment: 0 ** </pre>)^ ** ** ^This function causes all database schemas to be read from disk and ** parsed, if that has not already been done, and returns an error if ** any errors are encountered while loading the schema. */ SQLITE_API int sqlite3_table_column_metadata( sqlite3 *db, /* Connection handle */ const char *zDbName, /* Database name or NULL */ const char *zTableName, /* Table name */ const char *zColumnName, /* Column name */ char const **pzDataType, /* OUTPUT: Declared data type */ char const **pzCollSeq, /* OUTPUT: Collation sequence name */ int *pNotNull, /* OUTPUT: True if NOT NULL constraint exists */ |
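As a usage illustration for the sqlite3_table_column_metadata() declaration in the hunk above, a small hedged sketch; the "user" table and "name" column are assumed example names, not anything referenced by this check-in.

#include <sqlite3.h>
#include <stdio.h>

/* Sketch: report metadata for column "name" of table "user" in "main". */
static void show_column_metadata(sqlite3 *db){
  const char *zType = 0, *zColl = 0;
  int notNull = 0, primaryKey = 0, autoInc = 0;
  int rc = sqlite3_table_column_metadata(
      db, "main", "user", "name",
      &zType, &zColl, &notNull, &primaryKey, &autoInc);
  if( rc==SQLITE_OK ){
    printf("type=%s collation=%s notnull=%d pk=%d autoinc=%d\n",
           zType ? zType : "(none)", zColl ? zColl : "(none)",
           notNull, primaryKey, autoInc);
  }else{
    fprintf(stderr, "metadata lookup failed: %s\n", sqlite3_errmsg(db));
  }
}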
︙ | ︙ | |||
5502 5503 5504 5505 5506 5507 5508 | ** interface. The use of the [sqlite3_enable_load_extension()] interface ** should be avoided. This will keep the SQL function [load_extension()] ** disabled and prevent SQL injections from giving attackers ** access to extension loading capabilities. ** ** See also the [load_extension() SQL function]. */ | | | 5680 5681 5682 5683 5684 5685 5686 5687 5688 5689 5690 5691 5692 5693 5694 | ** interface. The use of the [sqlite3_enable_load_extension()] interface ** should be avoided. This will keep the SQL function [load_extension()] ** disabled and prevent SQL injections from giving attackers ** access to extension loading capabilities. ** ** See also the [load_extension() SQL function]. */ SQLITE_API int sqlite3_load_extension( sqlite3 *db, /* Load the extension into this database connection */ const char *zFile, /* Name of the shared library containing extension */ const char *zProc, /* Entry point. Derived from zFile if 0 */ char **pzErrMsg /* Put error message here if not 0 */ ); /* |
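To make the security advice in the comment above concrete, here is a hedged sketch of loading a run-time extension through the C API only, leaving the load_extension() SQL function disabled. The file name "./myext.so" and the NULL entry point are assumptions for illustration.

#include <sqlite3.h>
#include <stdio.h>

/* Sketch: enable C-API extension loading only, then load one extension. */
static int load_my_extension(sqlite3 *db){
  char *zErr = 0;
  int rc;

  sqlite3_db_config(db, SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION, 1, 0);
  rc = sqlite3_load_extension(db, "./myext.so", 0, &zErr);
  if( rc!=SQLITE_OK ){
    fprintf(stderr, "load failed: %s\n", zErr ? zErr : "unknown error");
    sqlite3_free(zErr);   /* error message is allocated by SQLite */
  }
  return rc;
}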
︙ | ︙ | |||
5525 5526 5527 5528 5529 5530 5531 | ** ^Extension loading is off by default. ** ^Call the sqlite3_enable_load_extension() routine with onoff==1 ** to turn extension loading on and call it with onoff==0 to turn ** it back off again. ** ** ^This interface enables or disables both the C-API ** [sqlite3_load_extension()] and the SQL function [load_extension()]. | | | | | | 5703 5704 5705 5706 5707 5708 5709 5710 5711 5712 5713 5714 5715 5716 5717 5718 5719 5720 5721 5722 5723 5724 5725 5726 5727 5728 5729 5730 5731 5732 5733 5734 5735 5736 5737 5738 | ** ^Extension loading is off by default. ** ^Call the sqlite3_enable_load_extension() routine with onoff==1 ** to turn extension loading on and call it with onoff==0 to turn ** it back off again. ** ** ^This interface enables or disables both the C-API ** [sqlite3_load_extension()] and the SQL function [load_extension()]. ** ^(Use [sqlite3_db_config](db,[SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION],..) ** to enable or disable only the C-API.)^ ** ** <b>Security warning:</b> It is recommended that extension loading ** be disabled using the [SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION] method ** rather than this interface, so the [load_extension()] SQL function ** remains disabled. This will prevent SQL injections from giving attackers ** access to extension loading capabilities. */ SQLITE_API int sqlite3_enable_load_extension(sqlite3 *db, int onoff); /* ** CAPI3REF: Automatically Load Statically Linked Extensions ** ** ^This interface causes the xEntryPoint() function to be invoked for ** each new [database connection] that is created. The idea here is that ** xEntryPoint() is the entry point for a statically linked [SQLite extension] ** that is to be automatically loaded into all new database connections. ** ** ^(Even though the function prototype shows that xEntryPoint() takes ** no arguments and returns void, SQLite invokes xEntryPoint() with three ** arguments and expects an integer result as if the signature of the ** entry point where as follows: ** ** <blockquote><pre> ** int xEntryPoint( ** sqlite3 *db, ** const char **pzErrMsg, ** const struct sqlite3_api_routines *pThunk |
︙ | ︙ | |||
5572 5573 5574 5575 5576 5577 5578 | ** ^Calling sqlite3_auto_extension(X) with an entry point X that is already ** on the list of automatic extensions is a harmless no-op. ^No entry point ** will be called more than once for each database connection that is opened. ** ** See also: [sqlite3_reset_auto_extension()] ** and [sqlite3_cancel_auto_extension()] */ | | | | | 5750 5751 5752 5753 5754 5755 5756 5757 5758 5759 5760 5761 5762 5763 5764 5765 5766 5767 5768 5769 5770 5771 5772 5773 5774 5775 5776 5777 5778 5779 5780 5781 5782 5783 5784 | ** ^Calling sqlite3_auto_extension(X) with an entry point X that is already ** on the list of automatic extensions is a harmless no-op. ^No entry point ** will be called more than once for each database connection that is opened. ** ** See also: [sqlite3_reset_auto_extension()] ** and [sqlite3_cancel_auto_extension()] */ SQLITE_API int sqlite3_auto_extension(void(*xEntryPoint)(void)); /* ** CAPI3REF: Cancel Automatic Extension Loading ** ** ^The [sqlite3_cancel_auto_extension(X)] interface unregisters the ** initialization routine X that was registered using a prior call to ** [sqlite3_auto_extension(X)]. ^The [sqlite3_cancel_auto_extension(X)] ** routine returns 1 if initialization routine X was successfully ** unregistered and it returns 0 if X was not on the list of initialization ** routines. */ SQLITE_API int sqlite3_cancel_auto_extension(void(*xEntryPoint)(void)); /* ** CAPI3REF: Reset Automatic Extension Loading ** ** ^This interface disables all automatic extensions previously ** registered using [sqlite3_auto_extension()]. */ SQLITE_API void sqlite3_reset_auto_extension(void); /* ** The interface to the virtual-table mechanism is currently considered ** to be experimental. The interface might change in incompatible ways. ** If this is a problem for you, do not use the interface at this time. ** ** When the virtual-table mechanism stabilizes, we will declare the |
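A brief sketch of the auto-extension interfaces declared above. The entry point sqlite3_myext_init is a hypothetical statically linked extension, assumed for illustration only.

#include <sqlite3.h>

/* Assumed extension entry point with the usual extension signature. */
extern int sqlite3_myext_init(sqlite3*, char**, const struct sqlite3_api_routines*);

static void register_auto_extension(void){
  /* The entry point is cast to the generic void(*)(void) type expected by
  ** sqlite3_auto_extension(); SQLite invokes it with the real arguments. */
  sqlite3_auto_extension((void(*)(void))sqlite3_myext_init);
}

static void unregister_auto_extension(void){
  sqlite3_cancel_auto_extension((void(*)(void))sqlite3_myext_init);
  /* or: sqlite3_reset_auto_extension();  to drop every registration */
}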
︙ | ︙ | |||
5746 5747 5748 5749 5750 5751 5752 | ** any database changes. In other words, if the xUpdate() returns ** SQLITE_CONSTRAINT, the database contents must be exactly as they were ** before xUpdate was called. By contrast, if SQLITE_INDEX_SCAN_UNIQUE is not ** set and xUpdate returns SQLITE_CONSTRAINT, any database changes made by ** the xUpdate method are automatically rolled back by SQLite. ** ** IMPORTANT: The estimatedRows field was added to the sqlite3_index_info | > | > | | 5924 5925 5926 5927 5928 5929 5930 5931 5932 5933 5934 5935 5936 5937 5938 5939 5940 5941 5942 5943 5944 5945 5946 | ** any database changes. In other words, if the xUpdate() returns ** SQLITE_CONSTRAINT, the database contents must be exactly as they were ** before xUpdate was called. By contrast, if SQLITE_INDEX_SCAN_UNIQUE is not ** set and xUpdate returns SQLITE_CONSTRAINT, any database changes made by ** the xUpdate method are automatically rolled back by SQLite. ** ** IMPORTANT: The estimatedRows field was added to the sqlite3_index_info ** structure for SQLite [version 3.8.2] ([dateof:3.8.2]). ** If a virtual table extension is ** used with an SQLite version earlier than 3.8.2, the results of attempting ** to read or write the estimatedRows field are undefined (but are likely ** to included crashing the application). The estimatedRows field should ** therefore only be used if [sqlite3_libversion_number()] returns a ** value greater than or equal to 3008002. Similarly, the idxFlags field ** was added for [version 3.9.0] ([dateof:3.9.0]). ** It may therefore only be used if ** sqlite3_libversion_number() returns a value greater than or equal to ** 3009000. */ struct sqlite3_index_info { /* Inputs */ int nConstraint; /* Number of entries in aConstraint */ struct sqlite3_index_constraint { |
︙ | ︙ | |||
5837 5838 5839 5840 5841 5842 5843 | ** invoke the destructor function (if it is not NULL) when SQLite ** no longer needs the pClientData pointer. ^The destructor will also ** be invoked if the call to sqlite3_create_module_v2() fails. ** ^The sqlite3_create_module() ** interface is equivalent to sqlite3_create_module_v2() with a NULL ** destructor. */ | | | | 6017 6018 6019 6020 6021 6022 6023 6024 6025 6026 6027 6028 6029 6030 6031 6032 6033 6034 6035 6036 6037 | ** invoke the destructor function (if it is not NULL) when SQLite ** no longer needs the pClientData pointer. ^The destructor will also ** be invoked if the call to sqlite3_create_module_v2() fails. ** ^The sqlite3_create_module() ** interface is equivalent to sqlite3_create_module_v2() with a NULL ** destructor. */ SQLITE_API int sqlite3_create_module( sqlite3 *db, /* SQLite connection to register module with */ const char *zName, /* Name of the module */ const sqlite3_module *p, /* Methods for the module */ void *pClientData /* Client data for xCreate/xConnect */ ); SQLITE_API int sqlite3_create_module_v2( sqlite3 *db, /* SQLite connection to register module with */ const char *zName, /* Name of the module */ const sqlite3_module *p, /* Methods for the module */ void *pClientData, /* Client data for xCreate/xConnect */ void(*xDestroy)(void*) /* Module destructor function */ ); |
︙ | ︙ | |||
5906 5907 5908 5909 5910 5911 5912 | ** CAPI3REF: Declare The Schema Of A Virtual Table ** ** ^The [xCreate] and [xConnect] methods of a ** [virtual table module] call this interface ** to declare the format (the names and datatypes of the columns) of ** the virtual tables they implement. */ | | | | 6086 6087 6088 6089 6090 6091 6092 6093 6094 6095 6096 6097 6098 6099 6100 6101 6102 6103 6104 6105 6106 6107 6108 6109 6110 6111 6112 6113 6114 6115 6116 6117 6118 6119 | ** CAPI3REF: Declare The Schema Of A Virtual Table ** ** ^The [xCreate] and [xConnect] methods of a ** [virtual table module] call this interface ** to declare the format (the names and datatypes of the columns) of ** the virtual tables they implement. */ SQLITE_API int sqlite3_declare_vtab(sqlite3*, const char *zSQL); /* ** CAPI3REF: Overload A Function For A Virtual Table ** METHOD: sqlite3 ** ** ^(Virtual tables can provide alternative implementations of functions ** using the [xFindFunction] method of the [virtual table module]. ** But global versions of those functions ** must exist in order to be overloaded.)^ ** ** ^(This API makes sure a global version of a function with a particular ** name and number of parameters exists. If no such function exists ** before this API is called, a new function is created.)^ ^The implementation ** of the new function always causes an exception to be thrown. So ** the new function is not good for anything by itself. Its only ** purpose is to be a placeholder function that can be overloaded ** by a [virtual table]. */ SQLITE_API int sqlite3_overload_function(sqlite3*, const char *zFuncName, int nArg); /* ** The interface to the virtual-table mechanism defined above (back up ** to a comment remarkably similar to this one) is currently considered ** to be experimental. The interface might change in incompatible ways. ** If this is a problem for you, do not use the interface at this time. ** |
︙ | ︙ | |||
6024 6025 6026 6027 6028 6029 6030 | ** ^The [sqlite3_bind_zeroblob()] and [sqlite3_result_zeroblob()] interfaces ** and the built-in [zeroblob] SQL function may be used to create a ** zero-filled blob to read or write using the incremental-blob interface. ** ** To avoid a resource leak, every open [BLOB handle] should eventually ** be released by a call to [sqlite3_blob_close()]. */ | | | 6204 6205 6206 6207 6208 6209 6210 6211 6212 6213 6214 6215 6216 6217 6218 | ** ^The [sqlite3_bind_zeroblob()] and [sqlite3_result_zeroblob()] interfaces ** and the built-in [zeroblob] SQL function may be used to create a ** zero-filled blob to read or write using the incremental-blob interface. ** ** To avoid a resource leak, every open [BLOB handle] should eventually ** be released by a call to [sqlite3_blob_close()]. */ SQLITE_API int sqlite3_blob_open( sqlite3*, const char *zDb, const char *zTable, const char *zColumn, sqlite3_int64 iRow, int flags, sqlite3_blob **ppBlob |
︙ | ︙ | |||
6057 6058 6059 6060 6061 6062 6063 | ** ^All subsequent calls to [sqlite3_blob_read()], [sqlite3_blob_write()] or ** [sqlite3_blob_reopen()] on an aborted blob handle immediately return ** SQLITE_ABORT. ^Calling [sqlite3_blob_bytes()] on an aborted blob handle ** always returns zero. ** ** ^This function sets the database handle error code and message. */ | | | 6237 6238 6239 6240 6241 6242 6243 6244 6245 6246 6247 6248 6249 6250 6251 | ** ^All subsequent calls to [sqlite3_blob_read()], [sqlite3_blob_write()] or ** [sqlite3_blob_reopen()] on an aborted blob handle immediately return ** SQLITE_ABORT. ^Calling [sqlite3_blob_bytes()] on an aborted blob handle ** always returns zero. ** ** ^This function sets the database handle error code and message. */ SQLITE_API int sqlite3_blob_reopen(sqlite3_blob *, sqlite3_int64); /* ** CAPI3REF: Close A BLOB Handle ** DESTRUCTOR: sqlite3_blob ** ** ^This function closes an open [BLOB handle]. ^(The BLOB handle is closed ** unconditionally. Even if this routine returns an error code, the |
︙ | ︙ | |||
6080 6081 6082 6083 6084 6085 6086 | ** Calling this function with an argument that is not a NULL pointer or an ** open blob handle results in undefined behaviour. ^Calling this routine ** with a null pointer (such as would be returned by a failed call to ** [sqlite3_blob_open()]) is a harmless no-op. ^Otherwise, if this function ** is passed a valid open blob handle, the values returned by the ** sqlite3_errcode() and sqlite3_errmsg() functions are set before returning. */ | | | | 6260 6261 6262 6263 6264 6265 6266 6267 6268 6269 6270 6271 6272 6273 6274 6275 6276 6277 6278 6279 6280 6281 6282 6283 6284 6285 6286 6287 6288 6289 6290 | ** Calling this function with an argument that is not a NULL pointer or an ** open blob handle results in undefined behaviour. ^Calling this routine ** with a null pointer (such as would be returned by a failed call to ** [sqlite3_blob_open()]) is a harmless no-op. ^Otherwise, if this function ** is passed a valid open blob handle, the values returned by the ** sqlite3_errcode() and sqlite3_errmsg() functions are set before returning. */ SQLITE_API int sqlite3_blob_close(sqlite3_blob *); /* ** CAPI3REF: Return The Size Of An Open BLOB ** METHOD: sqlite3_blob ** ** ^Returns the size in bytes of the BLOB accessible via the ** successfully opened [BLOB handle] in its only argument. ^The ** incremental blob I/O routines can only read or overwriting existing ** blob content; they cannot change the size of a blob. ** ** This routine only works on a [BLOB handle] which has been created ** by a prior successful call to [sqlite3_blob_open()] and which has not ** been closed by [sqlite3_blob_close()]. Passing any other pointer in ** to this routine results in undefined and probably undesirable behavior. */ SQLITE_API int sqlite3_blob_bytes(sqlite3_blob *); /* ** CAPI3REF: Read Data From A BLOB Incrementally ** METHOD: sqlite3_blob ** ** ^(This function is used to read data from an open [BLOB handle] into a ** caller-supplied buffer. N bytes of data are copied into buffer Z |
︙ | ︙ | |||
6125 6126 6127 6128 6129 6130 6131 | ** This routine only works on a [BLOB handle] which has been created ** by a prior successful call to [sqlite3_blob_open()] and which has not ** been closed by [sqlite3_blob_close()]. Passing any other pointer in ** to this routine results in undefined and probably undesirable behavior. ** ** See also: [sqlite3_blob_write()]. */ | | | 6305 6306 6307 6308 6309 6310 6311 6312 6313 6314 6315 6316 6317 6318 6319 | ** This routine only works on a [BLOB handle] which has been created ** by a prior successful call to [sqlite3_blob_open()] and which has not ** been closed by [sqlite3_blob_close()]. Passing any other pointer in ** to this routine results in undefined and probably undesirable behavior. ** ** See also: [sqlite3_blob_write()]. */ SQLITE_API int sqlite3_blob_read(sqlite3_blob *, void *Z, int N, int iOffset); /* ** CAPI3REF: Write Data Into A BLOB Incrementally ** METHOD: sqlite3_blob ** ** ^(This function is used to write data into an open [BLOB handle] from a ** caller-supplied buffer. N bytes of data are copied from the buffer Z |
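Pulling together the incremental-blob declarations in the hunks above (open, bytes, read, close), a hedged usage sketch follows; the table "docs", column "body", and helper name are assumptions for illustration.

#include <sqlite3.h>
#include <stdlib.h>

/* Sketch: read an entire BLOB incrementally into a malloc'd buffer.
** Returns NULL on error; on success *pnOut receives the blob size. */
static unsigned char *read_whole_blob(sqlite3 *db, sqlite3_int64 rowid, int *pnOut){
  sqlite3_blob *pBlob = 0;
  unsigned char *z = 0;
  int n, rc;

  rc = sqlite3_blob_open(db, "main", "docs", "body", rowid, 0 /*read-only*/, &pBlob);
  if( rc!=SQLITE_OK ) return 0;

  n = sqlite3_blob_bytes(pBlob);
  z = malloc(n>0 ? n : 1);
  if( z ) rc = sqlite3_blob_read(pBlob, z, n, 0);
  sqlite3_blob_close(pBlob);

  if( rc!=SQLITE_OK ){ free(z); return 0; }
  *pnOut = n;
  return z;
}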
︙ | ︙ | |||
6167 6168 6169 6170 6171 6172 6173 | ** This routine only works on a [BLOB handle] which has been created ** by a prior successful call to [sqlite3_blob_open()] and which has not ** been closed by [sqlite3_blob_close()]. Passing any other pointer in ** to this routine results in undefined and probably undesirable behavior. ** ** See also: [sqlite3_blob_read()]. */ | | | 6347 6348 6349 6350 6351 6352 6353 6354 6355 6356 6357 6358 6359 6360 6361 | ** This routine only works on a [BLOB handle] which has been created ** by a prior successful call to [sqlite3_blob_open()] and which has not ** been closed by [sqlite3_blob_close()]. Passing any other pointer in ** to this routine results in undefined and probably undesirable behavior. ** ** See also: [sqlite3_blob_read()]. */ SQLITE_API int sqlite3_blob_write(sqlite3_blob *, const void *z, int n, int iOffset); /* ** CAPI3REF: Virtual File System Objects ** ** A virtual filesystem (VFS) is an [sqlite3_vfs] object ** that SQLite uses to interact ** with the underlying operating system. Most SQLite builds come with a |
︙ | ︙ | |||
6198 6199 6200 6201 6202 6203 6204 | ** VFS is registered with a name that is NULL or an empty string, ** then the behavior is undefined. ** ** ^Unregister a VFS with the sqlite3_vfs_unregister() interface. ** ^(If the default VFS is unregistered, another VFS is chosen as ** the default. The choice for the new VFS is arbitrary.)^ */ | | | | | 6378 6379 6380 6381 6382 6383 6384 6385 6386 6387 6388 6389 6390 6391 6392 6393 6394 | ** VFS is registered with a name that is NULL or an empty string, ** then the behavior is undefined. ** ** ^Unregister a VFS with the sqlite3_vfs_unregister() interface. ** ^(If the default VFS is unregistered, another VFS is chosen as ** the default. The choice for the new VFS is arbitrary.)^ */ SQLITE_API sqlite3_vfs *sqlite3_vfs_find(const char *zVfsName); SQLITE_API int sqlite3_vfs_register(sqlite3_vfs*, int makeDflt); SQLITE_API int sqlite3_vfs_unregister(sqlite3_vfs*); /* ** CAPI3REF: Mutexes ** ** The SQLite core uses these routines for thread ** synchronization. Though they are intended for internal ** use by SQLite, code that links against SQLite is |
︙ | ︙ | |||
6316 6317 6318 6319 6320 6321 6322 | ** ** ^If the argument to sqlite3_mutex_enter(), sqlite3_mutex_try(), or ** sqlite3_mutex_leave() is a NULL pointer, then all three routines ** behave as no-ops. ** ** See also: [sqlite3_mutex_held()] and [sqlite3_mutex_notheld()]. */ | | | | | | | 6496 6497 6498 6499 6500 6501 6502 6503 6504 6505 6506 6507 6508 6509 6510 6511 6512 6513 6514 | ** ** ^If the argument to sqlite3_mutex_enter(), sqlite3_mutex_try(), or ** sqlite3_mutex_leave() is a NULL pointer, then all three routines ** behave as no-ops. ** ** See also: [sqlite3_mutex_held()] and [sqlite3_mutex_notheld()]. */ SQLITE_API sqlite3_mutex *sqlite3_mutex_alloc(int); SQLITE_API void sqlite3_mutex_free(sqlite3_mutex*); SQLITE_API void sqlite3_mutex_enter(sqlite3_mutex*); SQLITE_API int sqlite3_mutex_try(sqlite3_mutex*); SQLITE_API void sqlite3_mutex_leave(sqlite3_mutex*); /* ** CAPI3REF: Mutex Methods Object ** ** An instance of this structure defines the low-level routines ** used to allocate and use mutexes. ** |
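A small sketch of the mutex routines declared above, protecting an application-side counter; the counter and helper names are assumptions. In single-thread builds these calls are harmless no-ops, as the surrounding comments note.

#include <sqlite3.h>

static sqlite3_mutex *gLock = 0;
static int gCounter = 0;

static void counter_init(void){
  gLock = sqlite3_mutex_alloc(SQLITE_MUTEX_FAST);
}

static void counter_increment(void){
  sqlite3_mutex_enter(gLock);   /* blocks until the mutex is held */
  gCounter++;
  sqlite3_mutex_leave(gLock);
}

static void counter_shutdown(void){
  sqlite3_mutex_free(gLock);
  gLock = 0;
}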
︙ | ︙ | |||
6430 6431 6432 6433 6434 6435 6436 | ** the reason the mutex does not exist is because the build is not ** using mutexes. And we do not want the assert() containing the ** call to sqlite3_mutex_held() to fail, so a non-zero return is ** the appropriate thing to do. The sqlite3_mutex_notheld() ** interface should also return 1 when given a NULL pointer. */ #ifndef NDEBUG | | | | | | 6610 6611 6612 6613 6614 6615 6616 6617 6618 6619 6620 6621 6622 6623 6624 6625 6626 6627 6628 6629 6630 6631 6632 6633 6634 6635 6636 6637 6638 6639 6640 6641 6642 6643 6644 6645 6646 6647 6648 6649 6650 6651 6652 6653 6654 6655 6656 6657 6658 6659 6660 6661 6662 6663 6664 6665 | ** the reason the mutex does not exist is because the build is not ** using mutexes. And we do not want the assert() containing the ** call to sqlite3_mutex_held() to fail, so a non-zero return is ** the appropriate thing to do. The sqlite3_mutex_notheld() ** interface should also return 1 when given a NULL pointer. */ #ifndef NDEBUG SQLITE_API int sqlite3_mutex_held(sqlite3_mutex*); SQLITE_API int sqlite3_mutex_notheld(sqlite3_mutex*); #endif /* ** CAPI3REF: Mutex Types ** ** The [sqlite3_mutex_alloc()] interface takes a single argument ** which is one of these integer constants. ** ** The set of static mutexes may change from one SQLite release to the ** next. Applications that override the built-in mutex logic must be ** prepared to accommodate additional static mutexes. */ #define SQLITE_MUTEX_FAST 0 #define SQLITE_MUTEX_RECURSIVE 1 #define SQLITE_MUTEX_STATIC_MASTER 2 #define SQLITE_MUTEX_STATIC_MEM 3 /* sqlite3_malloc() */ #define SQLITE_MUTEX_STATIC_MEM2 4 /* NOT USED */ #define SQLITE_MUTEX_STATIC_OPEN 4 /* sqlite3BtreeOpen() */ #define SQLITE_MUTEX_STATIC_PRNG 5 /* sqlite3_randomness() */ #define SQLITE_MUTEX_STATIC_LRU 6 /* lru page list */ #define SQLITE_MUTEX_STATIC_LRU2 7 /* NOT USED */ #define SQLITE_MUTEX_STATIC_PMEM 7 /* sqlite3PageMalloc() */ #define SQLITE_MUTEX_STATIC_APP1 8 /* For use by application */ #define SQLITE_MUTEX_STATIC_APP2 9 /* For use by application */ #define SQLITE_MUTEX_STATIC_APP3 10 /* For use by application */ #define SQLITE_MUTEX_STATIC_VFS1 11 /* For use by built-in VFS */ #define SQLITE_MUTEX_STATIC_VFS2 12 /* For use by extension VFS */ #define SQLITE_MUTEX_STATIC_VFS3 13 /* For use by application VFS */ /* ** CAPI3REF: Retrieve the mutex for a database connection ** METHOD: sqlite3 ** ** ^This interface returns a pointer the [sqlite3_mutex] object that ** serializes access to the [database connection] given in the argument ** when the [threading mode] is Serialized. ** ^If the [threading mode] is Single-thread or Multi-thread then this ** routine returns a NULL pointer. */ SQLITE_API sqlite3_mutex *sqlite3_db_mutex(sqlite3*); /* ** CAPI3REF: Low-Level Control Of Database Files ** METHOD: sqlite3 ** ** ^The [sqlite3_file_control()] interface makes a direct call to the ** xFileControl method for the [sqlite3_io_methods] object associated |
︙ | ︙ | |||
6506 6507 6508 6509 6510 6511 6512 | ** or [sqlite3_errmsg()]. The underlying xFileControl method might ** also return SQLITE_ERROR. There is no way to distinguish between ** an incorrect zDbName and an SQLITE_ERROR return from the underlying ** xFileControl method. ** ** See also: [SQLITE_FCNTL_LOCKSTATE] */ | | | | 6686 6687 6688 6689 6690 6691 6692 6693 6694 6695 6696 6697 6698 6699 6700 6701 6702 6703 6704 6705 6706 6707 6708 6709 6710 6711 6712 6713 6714 6715 6716 6717 6718 6719 | ** or [sqlite3_errmsg()]. The underlying xFileControl method might ** also return SQLITE_ERROR. There is no way to distinguish between ** an incorrect zDbName and an SQLITE_ERROR return from the underlying ** xFileControl method. ** ** See also: [SQLITE_FCNTL_LOCKSTATE] */ SQLITE_API int sqlite3_file_control(sqlite3*, const char *zDbName, int op, void*); /* ** CAPI3REF: Testing Interface ** ** ^The sqlite3_test_control() interface is used to read out internal ** state of SQLite and to inject faults into SQLite for testing ** purposes. ^The first parameter is an operation code that determines ** the number, meaning, and operation of all subsequent parameters. ** ** This interface is not for use by applications. It exists solely ** for verifying the correct operation of the SQLite library. Depending ** on how the SQLite library is compiled, this interface might not exist. ** ** The details of the operation codes, their meanings, the parameters ** they take, and what they do are all subject to change without notice. ** Unlike most of the SQLite API, this function is not guaranteed to ** operate consistently from one release to the next. */ SQLITE_API int sqlite3_test_control(int op, ...); /* ** CAPI3REF: Testing Interface Operation Codes ** ** These constants are the valid operation code parameters used ** as the first argument to [sqlite3_test_control()]. ** |
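A hedged sketch of sqlite3_file_control(), declared above, using the SQLITE_FCNTL_VFSNAME opcode as one example verb (the opcode choice is an assumption, not something this hunk is about). The returned string comes from SQLite's allocator and must be released with sqlite3_free().

#include <sqlite3.h>
#include <stdio.h>

/* Sketch: ask the VFS stack of the "main" database for its name(s). */
static void print_vfs_name(sqlite3 *db){
  char *zVfs = 0;
  if( sqlite3_file_control(db, "main", SQLITE_FCNTL_VFSNAME, &zVfs)==SQLITE_OK
   && zVfs!=0 ){
    printf("main database VFS: %s\n", zVfs);
    sqlite3_free(zVfs);
  }
}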
︙ | ︙ | |||
6554 6555 6556 6557 6558 6559 6560 6561 6562 6563 6564 6565 6566 6567 | #define SQLITE_TESTCTRL_ALWAYS 13 #define SQLITE_TESTCTRL_RESERVE 14 #define SQLITE_TESTCTRL_OPTIMIZATIONS 15 #define SQLITE_TESTCTRL_ISKEYWORD 16 #define SQLITE_TESTCTRL_SCRATCHMALLOC 17 #define SQLITE_TESTCTRL_LOCALTIME_FAULT 18 #define SQLITE_TESTCTRL_EXPLAIN_STMT 19 /* NOT USED */ #define SQLITE_TESTCTRL_NEVER_CORRUPT 20 #define SQLITE_TESTCTRL_VDBE_COVERAGE 21 #define SQLITE_TESTCTRL_BYTEORDER 22 #define SQLITE_TESTCTRL_ISINIT 23 #define SQLITE_TESTCTRL_SORTER_MMAP 24 #define SQLITE_TESTCTRL_IMPOSTER 25 #define SQLITE_TESTCTRL_LAST 25 | > | 6734 6735 6736 6737 6738 6739 6740 6741 6742 6743 6744 6745 6746 6747 6748 | #define SQLITE_TESTCTRL_ALWAYS 13 #define SQLITE_TESTCTRL_RESERVE 14 #define SQLITE_TESTCTRL_OPTIMIZATIONS 15 #define SQLITE_TESTCTRL_ISKEYWORD 16 #define SQLITE_TESTCTRL_SCRATCHMALLOC 17 #define SQLITE_TESTCTRL_LOCALTIME_FAULT 18 #define SQLITE_TESTCTRL_EXPLAIN_STMT 19 /* NOT USED */ #define SQLITE_TESTCTRL_ONCE_RESET_THRESHOLD 19 #define SQLITE_TESTCTRL_NEVER_CORRUPT 20 #define SQLITE_TESTCTRL_VDBE_COVERAGE 21 #define SQLITE_TESTCTRL_BYTEORDER 22 #define SQLITE_TESTCTRL_ISINIT 23 #define SQLITE_TESTCTRL_SORTER_MMAP 24 #define SQLITE_TESTCTRL_IMPOSTER 25 #define SQLITE_TESTCTRL_LAST 25 |
︙ | ︙ | |||
6588 6589 6590 6591 6592 6593 6594 | ** ** If either the current value or the highwater mark is too large to ** be represented by a 32-bit integer, then the values returned by ** sqlite3_status() are undefined. ** ** See also: [sqlite3_db_status()] */ | | | | 6769 6770 6771 6772 6773 6774 6775 6776 6777 6778 6779 6780 6781 6782 6783 6784 | ** ** If either the current value or the highwater mark is too large to ** be represented by a 32-bit integer, then the values returned by ** sqlite3_status() are undefined. ** ** See also: [sqlite3_db_status()] */ SQLITE_API int sqlite3_status(int op, int *pCurrent, int *pHighwater, int resetFlag); SQLITE_API int sqlite3_status64( int op, sqlite3_int64 *pCurrent, sqlite3_int64 *pHighwater, int resetFlag ); |
︙ | ︙ | |||
6714 6715 6716 6717 6718 6719 6720 | ** reset back down to the current value. ** ** ^The sqlite3_db_status() routine returns SQLITE_OK on success and a ** non-zero [error code] on failure. ** ** See also: [sqlite3_status()] and [sqlite3_stmt_status()]. */ | | | 6895 6896 6897 6898 6899 6900 6901 6902 6903 6904 6905 6906 6907 6908 6909 | ** reset back down to the current value. ** ** ^The sqlite3_db_status() routine returns SQLITE_OK on success and a ** non-zero [error code] on failure. ** ** See also: [sqlite3_status()] and [sqlite3_stmt_status()]. */ SQLITE_API int sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int resetFlg); /* ** CAPI3REF: Status Parameters for database connections ** KEYWORDS: {SQLITE_DBSTATUS options} ** ** These constants are the available integer "verbs" that can be passed as ** the second argument to the [sqlite3_db_status()] interface. |
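A short sketch combining sqlite3_db_status() from the hunk above with the library-wide sqlite3_status() from the preceding hunk; the specific verbs chosen are just examples.

#include <sqlite3.h>
#include <stdio.h>

/* Sketch: print a few memory statistics, library-wide and per-connection. */
static void print_memory_stats(sqlite3 *db){
  int cur = 0, hi = 0;

  /* Library-wide heap usage (current and high-water mark, not reset). */
  sqlite3_status(SQLITE_STATUS_MEMORY_USED, &cur, &hi, 0);
  printf("library heap: %d bytes now, %d bytes peak\n", cur, hi);

  /* Pager-cache memory attributed to this one connection. */
  sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_USED, &cur, &hi, 0);
  printf("connection page cache: %d bytes\n", cur);

  /* Schema memory for main, temp, and any attached databases. */
  sqlite3_db_status(db, SQLITE_DBSTATUS_SCHEMA_USED, &cur, &hi, 0);
  printf("schema memory: %d bytes\n", cur);
}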
︙ | ︙ | |||
6760 6761 6762 6763 6764 6765 6766 6767 6768 6769 6770 6771 6772 6773 | ** the current value is always zero.)^ ** ** [[SQLITE_DBSTATUS_CACHE_USED]] ^(<dt>SQLITE_DBSTATUS_CACHE_USED</dt> ** <dd>This parameter returns the approximate number of bytes of heap ** memory used by all pager caches associated with the database connection.)^ ** ^The highwater mark associated with SQLITE_DBSTATUS_CACHE_USED is always 0. ** ** [[SQLITE_DBSTATUS_SCHEMA_USED]] ^(<dt>SQLITE_DBSTATUS_SCHEMA_USED</dt> ** <dd>This parameter returns the approximate number of bytes of heap ** memory used to store the schema for all databases associated ** with the connection - main, temp, and any [ATTACH]-ed databases.)^ ** ^The full amount of memory used by the schemas is reported, even if the ** schema memory is shared with other database connections due to ** [shared cache mode] being enabled. | > > > > > > > > > > > > | 6941 6942 6943 6944 6945 6946 6947 6948 6949 6950 6951 6952 6953 6954 6955 6956 6957 6958 6959 6960 6961 6962 6963 6964 6965 6966 | ** the current value is always zero.)^ ** ** [[SQLITE_DBSTATUS_CACHE_USED]] ^(<dt>SQLITE_DBSTATUS_CACHE_USED</dt> ** <dd>This parameter returns the approximate number of bytes of heap ** memory used by all pager caches associated with the database connection.)^ ** ^The highwater mark associated with SQLITE_DBSTATUS_CACHE_USED is always 0. ** ** [[SQLITE_DBSTATUS_CACHE_USED_SHARED]] ** ^(<dt>SQLITE_DBSTATUS_CACHE_USED_SHARED</dt> ** <dd>This parameter is similar to DBSTATUS_CACHE_USED, except that if a ** pager cache is shared between two or more connections the bytes of heap ** memory used by that pager cache is divided evenly between the attached ** connections.)^ In other words, if none of the pager caches associated ** with the database connection are shared, this request returns the same ** value as DBSTATUS_CACHE_USED. Or, if one or more or the pager caches are ** shared, the value returned by this call will be smaller than that returned ** by DBSTATUS_CACHE_USED. ^The highwater mark associated with ** SQLITE_DBSTATUS_CACHE_USED_SHARED is always 0. ** ** [[SQLITE_DBSTATUS_SCHEMA_USED]] ^(<dt>SQLITE_DBSTATUS_SCHEMA_USED</dt> ** <dd>This parameter returns the approximate number of bytes of heap ** memory used to store the schema for all databases associated ** with the connection - main, temp, and any [ATTACH]-ed databases.)^ ** ^The full amount of memory used by the schemas is reported, even if the ** schema memory is shared with other database connections due to ** [shared cache mode] being enabled. |
︙ | ︙ | |||
6817 6818 6819 6820 6821 6822 6823 | #define SQLITE_DBSTATUS_LOOKASIDE_HIT 4 #define SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE 5 #define SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL 6 #define SQLITE_DBSTATUS_CACHE_HIT 7 #define SQLITE_DBSTATUS_CACHE_MISS 8 #define SQLITE_DBSTATUS_CACHE_WRITE 9 #define SQLITE_DBSTATUS_DEFERRED_FKS 10 | > | | 7010 7011 7012 7013 7014 7015 7016 7017 7018 7019 7020 7021 7022 7023 7024 7025 | #define SQLITE_DBSTATUS_LOOKASIDE_HIT 4 #define SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE 5 #define SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL 6 #define SQLITE_DBSTATUS_CACHE_HIT 7 #define SQLITE_DBSTATUS_CACHE_MISS 8 #define SQLITE_DBSTATUS_CACHE_WRITE 9 #define SQLITE_DBSTATUS_DEFERRED_FKS 10 #define SQLITE_DBSTATUS_CACHE_USED_SHARED 11 #define SQLITE_DBSTATUS_MAX 11 /* Largest defined DBSTATUS */ /* ** CAPI3REF: Prepared Statement Status ** METHOD: sqlite3_stmt ** ** ^(Each prepared statement maintains various |
︙ | ︙ | |||
6844 6845 6846 6847 6848 6849 6850 | ** to be interrogated.)^ ** ^The current value of the requested counter is returned. ** ^If the resetFlg is true, then the counter is reset to zero after this ** interface call returns. ** ** See also: [sqlite3_status()] and [sqlite3_db_status()]. */ | | | 7038 7039 7040 7041 7042 7043 7044 7045 7046 7047 7048 7049 7050 7051 7052 | ** to be interrogated.)^ ** ^The current value of the requested counter is returned. ** ^If the resetFlg is true, then the counter is reset to zero after this ** interface call returns. ** ** See also: [sqlite3_status()] and [sqlite3_db_status()]. */ SQLITE_API int sqlite3_stmt_status(sqlite3_stmt*, int op,int resetFlg); /* ** CAPI3REF: Status Parameters for prepared statements ** KEYWORDS: {SQLITE_STMTSTATUS counter} {SQLITE_STMTSTATUS counters} ** ** These preprocessor macros define integer codes that name counter ** values associated with the [sqlite3_stmt_status()] interface. |
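A minimal sketch of sqlite3_stmt_status(), declared above, used to flag statements that may be missing an index; the thresholds and helper name are assumptions.

#include <sqlite3.h>
#include <stdio.h>

/* Sketch: after a query has run, warn if it needed full scans or sorts. */
static void report_stmt_counters(sqlite3_stmt *pStmt){
  int nScan = sqlite3_stmt_status(pStmt, SQLITE_STMTSTATUS_FULLSCAN_STEP, 0);
  int nSort = sqlite3_stmt_status(pStmt, SQLITE_STMTSTATUS_SORT, 0);
  if( nScan>0 || nSort>0 ){
    fprintf(stderr, "warning: %d full-scan steps, %d sorts for: %s\n",
            nScan, nSort, sqlite3_sql(pStmt));
  }
}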
︙ | ︙ | |||
7313 7314 7315 7316 7317 7318 7319 | ** The [sqlite3_backup] object itself is partially threadsafe. Multiple ** threads may safely make multiple concurrent calls to sqlite3_backup_step(). ** However, the sqlite3_backup_remaining() and sqlite3_backup_pagecount() ** APIs are not strictly speaking threadsafe. If they are invoked at the ** same time as another thread is invoking sqlite3_backup_step() it is ** possible that they return invalid values. */ | | | | | | | 7507 7508 7509 7510 7511 7512 7513 7514 7515 7516 7517 7518 7519 7520 7521 7522 7523 7524 7525 7526 7527 7528 7529 7530 | ** The [sqlite3_backup] object itself is partially threadsafe. Multiple ** threads may safely make multiple concurrent calls to sqlite3_backup_step(). ** However, the sqlite3_backup_remaining() and sqlite3_backup_pagecount() ** APIs are not strictly speaking threadsafe. If they are invoked at the ** same time as another thread is invoking sqlite3_backup_step() it is ** possible that they return invalid values. */ SQLITE_API sqlite3_backup *sqlite3_backup_init( sqlite3 *pDest, /* Destination database handle */ const char *zDestName, /* Destination database name */ sqlite3 *pSource, /* Source database handle */ const char *zSourceName /* Source database name */ ); SQLITE_API int sqlite3_backup_step(sqlite3_backup *p, int nPage); SQLITE_API int sqlite3_backup_finish(sqlite3_backup *p); SQLITE_API int sqlite3_backup_remaining(sqlite3_backup *p); SQLITE_API int sqlite3_backup_pagecount(sqlite3_backup *p); /* ** CAPI3REF: Unlock Notification ** METHOD: sqlite3 ** ** ^When running in shared-cache mode, a database operation may fail with ** an [SQLITE_LOCKED] error if the required locks on the shared-cache or |
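For the online-backup declarations in the hunk above, a hedged sketch of the usual backup loop; the destination file name, page-batch size, and retry delay are illustrative assumptions.

#include <sqlite3.h>

/* Sketch: copy the "main" database of pSrc into the file zDest. */
static int backup_to_file(sqlite3 *pSrc, const char *zDest){
  sqlite3 *pDest = 0;
  sqlite3_backup *p;
  int rc = sqlite3_open(zDest, &pDest);

  if( rc==SQLITE_OK ){
    p = sqlite3_backup_init(pDest, "main", pSrc, "main");
    if( p ){
      do{
        rc = sqlite3_backup_step(p, 64);        /* copy 64 pages per step */
        if( rc==SQLITE_BUSY || rc==SQLITE_LOCKED ) sqlite3_sleep(250);
      }while( rc==SQLITE_OK || rc==SQLITE_BUSY || rc==SQLITE_LOCKED );
      sqlite3_backup_finish(p);
    }
    rc = sqlite3_errcode(pDest);
  }
  sqlite3_close(pDest);
  return rc;
}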
︙ | ︙ | |||
7439 7440 7441 7442 7443 7444 7445 | ** ** One way around this problem is to check the extended error code returned ** by an sqlite3_step() call. ^(If there is a blocking connection, then the ** extended error code is set to SQLITE_LOCKED_SHAREDCACHE. Otherwise, in ** the special "DROP TABLE/INDEX" case, the extended error code is just ** SQLITE_LOCKED.)^ */ | | | | | | 7633 7634 7635 7636 7637 7638 7639 7640 7641 7642 7643 7644 7645 7646 7647 7648 7649 7650 7651 7652 7653 7654 7655 7656 7657 7658 7659 7660 7661 7662 7663 7664 7665 7666 7667 7668 7669 7670 7671 7672 7673 7674 7675 7676 7677 7678 7679 7680 | ** ** One way around this problem is to check the extended error code returned ** by an sqlite3_step() call. ^(If there is a blocking connection, then the ** extended error code is set to SQLITE_LOCKED_SHAREDCACHE. Otherwise, in ** the special "DROP TABLE/INDEX" case, the extended error code is just ** SQLITE_LOCKED.)^ */ SQLITE_API int sqlite3_unlock_notify( sqlite3 *pBlocked, /* Waiting connection */ void (*xNotify)(void **apArg, int nArg), /* Callback function to invoke */ void *pNotifyArg /* Argument to pass to xNotify */ ); /* ** CAPI3REF: String Comparison ** ** ^The [sqlite3_stricmp()] and [sqlite3_strnicmp()] APIs allow applications ** and extensions to compare the contents of two buffers containing UTF-8 ** strings in a case-independent fashion, using the same definition of "case ** independence" that SQLite uses internally when comparing identifiers. */ SQLITE_API int sqlite3_stricmp(const char *, const char *); SQLITE_API int sqlite3_strnicmp(const char *, const char *, int); /* ** CAPI3REF: String Globbing * ** ^The [sqlite3_strglob(P,X)] interface returns zero if and only if ** string X matches the [GLOB] pattern P. ** ^The definition of [GLOB] pattern matching used in ** [sqlite3_strglob(P,X)] is the same as for the "X GLOB P" operator in the ** SQL dialect understood by SQLite. ^The [sqlite3_strglob(P,X)] function ** is case sensitive. ** ** Note that this routine returns zero on a match and non-zero if the strings ** do not match, the same as [sqlite3_stricmp()] and [sqlite3_strnicmp()]. ** ** See also: [sqlite3_strlike()]. */ SQLITE_API int sqlite3_strglob(const char *zGlob, const char *zStr); /* ** CAPI3REF: String LIKE Matching * ** ^The [sqlite3_strlike(P,X,E)] interface returns zero if and only if ** string X matches the [LIKE] pattern P with escape character E. ** ^The definition of [LIKE] pattern matching used in |
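A quick sketch of the string helpers declared above (sqlite3_stricmp(), sqlite3_strnicmp(), sqlite3_strglob()), together with sqlite3_strlike() from the next hunk; the sample strings are arbitrary. Note that all of them return zero on a match or equality, like strcmp().

#include <sqlite3.h>
#include <assert.h>

static void string_helper_demo(void){
  assert( sqlite3_stricmp("Fossil", "fossil")==0 );          /* case-insensitive  */
  assert( sqlite3_strnicmp("CHECK-IN", "check", 5)==0 );     /* first 5 chars     */
  assert( sqlite3_strglob("*.fossil", "repo.fossil")==0 );   /* GLOB semantics    */
  assert( sqlite3_strglob("*.fossil", "repo.sqlite")!=0 );
  assert( sqlite3_strlike("readme%", "README.md", 0)==0 );   /* LIKE, no escape   */
}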
︙ | ︙ | |||
7495 7496 7497 7498 7499 7500 7501 | ** only ASCII characters are case folded. ** ** Note that this routine returns zero on a match and non-zero if the strings ** do not match, the same as [sqlite3_stricmp()] and [sqlite3_strnicmp()]. ** ** See also: [sqlite3_strglob()]. */ | | | 7689 7690 7691 7692 7693 7694 7695 7696 7697 7698 7699 7700 7701 7702 7703 | ** only ASCII characters are case folded. ** ** Note that this routine returns zero on a match and non-zero if the strings ** do not match, the same as [sqlite3_stricmp()] and [sqlite3_strnicmp()]. ** ** See also: [sqlite3_strglob()]. */ SQLITE_API int sqlite3_strlike(const char *zGlob, const char *zStr, unsigned int cEsc); /* ** CAPI3REF: Error Logging Interface ** ** ^The [sqlite3_log()] interface writes a message into the [error log] ** established by the [SQLITE_CONFIG_LOG] option to [sqlite3_config()]. ** ^If logging is enabled, the zFormat string and subsequent arguments are |
︙ | ︙ | |||
7518 7519 7520 7521 7522 7523 7524 | ** ** To avoid deadlocks and other threading problems, the sqlite3_log() routine ** will not use dynamically allocated memory. The log message is stored in ** a fixed-length buffer on the stack. If the log message is longer than ** a few hundred characters, it will be truncated to the length of the ** buffer. */ | | | 7712 7713 7714 7715 7716 7717 7718 7719 7720 7721 7722 7723 7724 7725 7726 | ** ** To avoid deadlocks and other threading problems, the sqlite3_log() routine ** will not use dynamically allocated memory. The log message is stored in ** a fixed-length buffer on the stack. If the log message is longer than ** a few hundred characters, it will be truncated to the length of the ** buffer. */ SQLITE_API void sqlite3_log(int iErrCode, const char *zFormat, ...); /* ** CAPI3REF: Write-Ahead Log Commit Hook ** METHOD: sqlite3 ** ** ^The [sqlite3_wal_hook()] function is used to register a callback that ** is invoked each time data is committed to a database in wal mode. |
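A hedged sketch of wiring the error log described above to stderr via SQLITE_CONFIG_LOG; the callback and message text are assumptions. The configuration call must happen before the library is otherwise used.

#include <sqlite3.h>
#include <stdio.h>

static void log_callback(void *pArg, int iErrCode, const char *zMsg){
  (void)pArg;
  fprintf(stderr, "sqlite(%d): %s\n", iErrCode, zMsg);
}

static void setup_logging(void){
  /* Register the callback before sqlite3_initialize()/first use. */
  sqlite3_config(SQLITE_CONFIG_LOG, log_callback, (void*)0);
  /* Application code may write to the same log: */
  sqlite3_log(SQLITE_NOTICE, "logging initialized");
}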
︙ | ︙ | |||
7554 7555 7556 7557 7558 7559 7560 | ** A single database handle may have at most a single write-ahead log callback ** registered at one time. ^Calling [sqlite3_wal_hook()] replaces any ** previously registered write-ahead log callback. ^Note that the ** [sqlite3_wal_autocheckpoint()] interface and the ** [wal_autocheckpoint pragma] both invoke [sqlite3_wal_hook()] and will ** overwrite any prior [sqlite3_wal_hook()] settings. */ | | | 7748 7749 7750 7751 7752 7753 7754 7755 7756 7757 7758 7759 7760 7761 7762 | ** A single database handle may have at most a single write-ahead log callback ** registered at one time. ^Calling [sqlite3_wal_hook()] replaces any ** previously registered write-ahead log callback. ^Note that the ** [sqlite3_wal_autocheckpoint()] interface and the ** [wal_autocheckpoint pragma] both invoke [sqlite3_wal_hook()] and will ** overwrite any prior [sqlite3_wal_hook()] settings. */ SQLITE_API void *sqlite3_wal_hook( sqlite3*, int(*)(void *,sqlite3*,const char*,int), void* ); /* ** CAPI3REF: Configure an auto-checkpoint |
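A sketch of the write-ahead-log hook declared above; the 1000-frame threshold mirrors the default auto-checkpoint setting but is an assumption here, as is the use of a passive checkpoint inside the callback.

#include <sqlite3.h>
#include <stdio.h>

/* Sketch: log WAL commits and checkpoint once the log grows large. */
static int wal_hook_cb(void *pArg, sqlite3 *db, const char *zDb, int nFrame){
  (void)pArg;
  printf("commit on %s: wal is now %d frames\n", zDb, nFrame);
  if( nFrame>=1000 ){
    sqlite3_wal_checkpoint_v2(db, zDb, SQLITE_CHECKPOINT_PASSIVE, 0, 0);
  }
  return SQLITE_OK;
}

static void install_wal_hook(sqlite3 *db){
  sqlite3_wal_hook(db, wal_hook_cb, 0);
}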
︙ | ︙ | |||
7589 7590 7591 7592 7593 7594 7595 | ** ** ^Every new [database connection] defaults to having the auto-checkpoint ** enabled with a threshold of 1000 or [SQLITE_DEFAULT_WAL_AUTOCHECKPOINT] ** pages. The use of this interface ** is only necessary if the default setting is found to be suboptimal ** for a particular application. */ | | | 7783 7784 7785 7786 7787 7788 7789 7790 7791 7792 7793 7794 7795 7796 7797 | ** ** ^Every new [database connection] defaults to having the auto-checkpoint ** enabled with a threshold of 1000 or [SQLITE_DEFAULT_WAL_AUTOCHECKPOINT] ** pages. The use of this interface ** is only necessary if the default setting is found to be suboptimal ** for a particular application. */ SQLITE_API int sqlite3_wal_autocheckpoint(sqlite3 *db, int N); /* ** CAPI3REF: Checkpoint a database ** METHOD: sqlite3 ** ** ^(The sqlite3_wal_checkpoint(D,X) is equivalent to ** [sqlite3_wal_checkpoint_v2](D,X,[SQLITE_CHECKPOINT_PASSIVE],0,0).)^ |
︙ | ︙ | |||
7611 7612 7613 7614 7615 7616 7617 | ** This interface used to be the only way to cause a checkpoint to ** occur. But then the newer and more powerful [sqlite3_wal_checkpoint_v2()] ** interface was added. This interface is retained for backwards ** compatibility and as a convenience for applications that need to manually ** start a callback but which do not need the full power (and corresponding ** complication) of [sqlite3_wal_checkpoint_v2()]. */ | | | 7805 7806 7807 7808 7809 7810 7811 7812 7813 7814 7815 7816 7817 7818 7819 | ** This interface used to be the only way to cause a checkpoint to ** occur. But then the newer and more powerful [sqlite3_wal_checkpoint_v2()] ** interface was added. This interface is retained for backwards ** compatibility and as a convenience for applications that need to manually ** start a callback but which do not need the full power (and corresponding ** complication) of [sqlite3_wal_checkpoint_v2()]. */ SQLITE_API int sqlite3_wal_checkpoint(sqlite3 *db, const char *zDb); /* ** CAPI3REF: Checkpoint a database ** METHOD: sqlite3 ** ** ^(The sqlite3_wal_checkpoint_v2(D,X,M,L,C) interface runs a checkpoint ** operation on database X of [database connection] D in mode M. Status |
︙ | ︙ | |||
7705 7706 7707 7708 7709 7710 7711 | ** the sqlite3_wal_checkpoint_v2() interface ** sets the error information that is queried by ** [sqlite3_errcode()] and [sqlite3_errmsg()]. ** ** ^The [PRAGMA wal_checkpoint] command can be used to invoke this interface ** from SQL. */ | | | 7899 7900 7901 7902 7903 7904 7905 7906 7907 7908 7909 7910 7911 7912 7913 | ** the sqlite3_wal_checkpoint_v2() interface ** sets the error information that is queried by ** [sqlite3_errcode()] and [sqlite3_errmsg()]. ** ** ^The [PRAGMA wal_checkpoint] command can be used to invoke this interface ** from SQL. */ SQLITE_API int sqlite3_wal_checkpoint_v2( sqlite3 *db, /* Database handle */ const char *zDb, /* Name of attached database (or NULL) */ int eMode, /* SQLITE_CHECKPOINT_* value */ int *pnLog, /* OUT: Size of WAL log in frames */ int *pnCkpt /* OUT: Total number of frames checkpointed */ ); |
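For the sqlite3_wal_checkpoint_v2() declaration above, a small hedged sketch; the choice of a TRUNCATE checkpoint on "main" is an example, not something this check-in prescribes.

#include <sqlite3.h>
#include <stdio.h>

/* Sketch: run a TRUNCATE checkpoint and report how much WAL was moved. */
static void checkpoint_main(sqlite3 *db){
  int nLog = 0, nCkpt = 0;
  int rc = sqlite3_wal_checkpoint_v2(db, "main",
                                     SQLITE_CHECKPOINT_TRUNCATE,
                                     &nLog, &nCkpt);
  if( rc==SQLITE_OK ){
    printf("wal frames: %d, checkpointed: %d\n", nLog, nCkpt);
  }else{
    fprintf(stderr, "checkpoint failed: %s\n", sqlite3_errmsg(db));
  }
}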
︙ | ︙ | |||
7741 7742 7743 7744 7745 7746 7747 | ** If this interface is invoked outside the context of an xConnect or ** xCreate virtual table method then the behavior is undefined. ** ** At present, there is only one option that may be configured using ** this function. (See [SQLITE_VTAB_CONSTRAINT_SUPPORT].) Further options ** may be added in the future. */ | | | 7935 7936 7937 7938 7939 7940 7941 7942 7943 7944 7945 7946 7947 7948 7949 | ** If this interface is invoked outside the context of an xConnect or ** xCreate virtual table method then the behavior is undefined. ** ** At present, there is only one option that may be configured using ** this function. (See [SQLITE_VTAB_CONSTRAINT_SUPPORT].) Further options ** may be added in the future. */ SQLITE_API int sqlite3_vtab_config(sqlite3*, int op, ...); /* ** CAPI3REF: Virtual Table Configuration Options ** ** These macros define the various options to the ** [sqlite3_vtab_config()] interface that [virtual table] implementations ** can use to customize and optimize their behavior. |
︙ | ︙ | |||
7794 7795 7796 7797 7798 7799 7800 | ** This function may only be called from within a call to the [xUpdate] method ** of a [virtual table] implementation for an INSERT or UPDATE operation. ^The ** value returned is one of [SQLITE_ROLLBACK], [SQLITE_IGNORE], [SQLITE_FAIL], ** [SQLITE_ABORT], or [SQLITE_REPLACE], according to the [ON CONFLICT] mode ** of the SQL statement that triggered the call to the [xUpdate] method of the ** [virtual table]. */ | | | 7988 7989 7990 7991 7992 7993 7994 7995 7996 7997 7998 7999 8000 8001 8002 | ** This function may only be called from within a call to the [xUpdate] method ** of a [virtual table] implementation for an INSERT or UPDATE operation. ^The ** value returned is one of [SQLITE_ROLLBACK], [SQLITE_IGNORE], [SQLITE_FAIL], ** [SQLITE_ABORT], or [SQLITE_REPLACE], according to the [ON CONFLICT] mode ** of the SQL statement that triggered the call to the [xUpdate] method of the ** [virtual table]. */ SQLITE_API int sqlite3_vtab_on_conflict(sqlite3 *); /* ** CAPI3REF: Conflict resolution modes ** KEYWORDS: {conflict resolution mode} ** ** These constants are returned by [sqlite3_vtab_on_conflict()] to ** inform a [virtual table] implementation what the [ON CONFLICT] mode |
︙ | ︙ | |||
7899 7900 7901 7902 7903 7904 7905 | ** ^Statistics might not be available for all loops in all statements. ^In cases ** where there exist loops with no available statistics, this function behaves ** as if the loop did not exist - it returns non-zero and leave the variable ** that pOut points to unchanged. ** ** See also: [sqlite3_stmt_scanstatus_reset()] */ | | | | 8093 8094 8095 8096 8097 8098 8099 8100 8101 8102 8103 8104 8105 8106 8107 8108 8109 8110 8111 8112 8113 8114 8115 8116 8117 8118 8119 8120 8121 8122 8123 | ** ^Statistics might not be available for all loops in all statements. ^In cases ** where there exist loops with no available statistics, this function behaves ** as if the loop did not exist - it returns non-zero and leave the variable ** that pOut points to unchanged. ** ** See also: [sqlite3_stmt_scanstatus_reset()] */ SQLITE_API int sqlite3_stmt_scanstatus( sqlite3_stmt *pStmt, /* Prepared statement for which info desired */ int idx, /* Index of loop to report on */ int iScanStatusOp, /* Information desired. SQLITE_SCANSTAT_* */ void *pOut /* Result written here */ ); /* ** CAPI3REF: Zero Scan-Status Counters ** METHOD: sqlite3_stmt ** ** ^Zero all [sqlite3_stmt_scanstatus()] related event counters. ** ** This API is only available if the library is built with pre-processor ** symbol [SQLITE_ENABLE_STMT_SCANSTATUS] defined. */ SQLITE_API void sqlite3_stmt_scanstatus_reset(sqlite3_stmt*); /* ** CAPI3REF: Flush caches to disk mid-transaction ** ** ^If a write-transaction is open on [database connection] D when the ** [sqlite3_db_cacheflush(D)] interface invoked, any dirty ** pages in the pager-cache that are not currently in use are written out |
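A rough sketch of iterating the scan-status counters declared above; it assumes a build with SQLITE_ENABLE_STMT_SCANSTATUS and a statement that has already been stepped to completion.

#include <sqlite3.h>
#include <stdio.h>

/* Sketch: dump per-loop statistics, then reset the counters. */
static void dump_scanstatus(sqlite3_stmt *pStmt){
  int idx;
  for(idx=0; ; idx++){
    sqlite3_int64 nLoop = 0, nVisit = 0;
    const char *zName = 0;
    /* Non-zero return means there is no loop with this index. */
    if( sqlite3_stmt_scanstatus(pStmt, idx, SQLITE_SCANSTAT_NLOOP, &nLoop) ) break;
    sqlite3_stmt_scanstatus(pStmt, idx, SQLITE_SCANSTAT_NVISIT, &nVisit);
    sqlite3_stmt_scanstatus(pStmt, idx, SQLITE_SCANSTAT_NAME, &zName);
    printf("loop %d (%s): %lld iterations, %lld rows visited\n",
           idx, zName ? zName : "?", (long long)nLoop, (long long)nVisit);
  }
  sqlite3_stmt_scanstatus_reset(pStmt);
}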
︙ | ︙ | |||
7947 7948 7949 7950 7951 7952 7953 | ** abandoned and an SQLite [error code] is returned to the caller immediately. ** ** ^Otherwise, if no error occurs, [sqlite3_db_cacheflush()] returns SQLITE_OK. ** ** ^This function does not set the database handle error code or message ** returned by the [sqlite3_errcode()] and [sqlite3_errmsg()] functions. */ | | | 8141 8142 8143 8144 8145 8146 8147 8148 8149 8150 8151 8152 8153 8154 8155 | ** abandoned and an SQLite [error code] is returned to the caller immediately. ** ** ^Otherwise, if no error occurs, [sqlite3_db_cacheflush()] returns SQLITE_OK. ** ** ^This function does not set the database handle error code or message ** returned by the [sqlite3_errcode()] and [sqlite3_errmsg()] functions. */ SQLITE_API int sqlite3_db_cacheflush(sqlite3*); /* ** CAPI3REF: The pre-update hook. ** ** ^These interfaces are only available if SQLite is compiled using the ** [SQLITE_ENABLE_PREUPDATE_HOOK] compile-time option. ** |
︙ | ︙ | |||
7973 7974 7975 7976 7977 7978 7979 | ** ^The preupdate hook only fires for changes to [rowid tables]; the preupdate ** hook is not invoked for changes to [virtual tables] or [WITHOUT ROWID] ** tables. ** ** ^The second parameter to the preupdate callback is a pointer to ** the [database connection] that registered the preupdate hook. ** ^The third parameter to the preupdate callback is one of the constants | | | 8167 8168 8169 8170 8171 8172 8173 8174 8175 8176 8177 8178 8179 8180 8181 | ** ^The preupdate hook only fires for changes to [rowid tables]; the preupdate ** hook is not invoked for changes to [virtual tables] or [WITHOUT ROWID] ** tables. ** ** ^The second parameter to the preupdate callback is a pointer to ** the [database connection] that registered the preupdate hook. ** ^The third parameter to the preupdate callback is one of the constants ** [SQLITE_INSERT], [SQLITE_DELETE], or [SQLITE_UPDATE] to identify the ** kind of update operation that is about to occur. ** ^(The fourth parameter to the preupdate callback is the name of the ** database within the database connection that is being modified. This ** will be "main" for the main database or "temp" for TEMP tables or ** the name given after the AS keyword in the [ATTACH] statement for attached ** databases.)^ ** ^The fifth parameter to the preupdate callback is the name of the |
︙ | ︙ | |||
8027 8028 8029 8030 8031 8032 8033 | ** callback was invoked as a result of a direct insert, update, or delete ** operation; or 1 for inserts, updates, or deletes invoked by top-level ** triggers; or 2 for changes resulting from triggers called by top-level ** triggers; and so forth. ** ** See also: [sqlite3_update_hook()] */ | | | | | | | | 8221 8222 8223 8224 8225 8226 8227 8228 8229 8230 8231 8232 8233 8234 8235 8236 8237 8238 8239 8240 8241 8242 8243 8244 8245 8246 8247 8248 8249 8250 8251 8252 8253 8254 8255 8256 8257 8258 8259 8260 8261 8262 8263 | ** callback was invoked as a result of a direct insert, update, or delete ** operation; or 1 for inserts, updates, or deletes invoked by top-level ** triggers; or 2 for changes resulting from triggers called by top-level ** triggers; and so forth. ** ** See also: [sqlite3_update_hook()] */ SQLITE_API SQLITE_EXPERIMENTAL void *sqlite3_preupdate_hook( sqlite3 *db, void(*xPreUpdate)( void *pCtx, /* Copy of third arg to preupdate_hook() */ sqlite3 *db, /* Database handle */ int op, /* SQLITE_UPDATE, DELETE or INSERT */ char const *zDb, /* Database name */ char const *zName, /* Table name */ sqlite3_int64 iKey1, /* Rowid of row about to be deleted/updated */ sqlite3_int64 iKey2 /* New rowid value (for a rowid UPDATE) */ ), void* ); SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_preupdate_old(sqlite3 *, int, sqlite3_value **); SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_preupdate_count(sqlite3 *); SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_preupdate_depth(sqlite3 *); SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_preupdate_new(sqlite3 *, int, sqlite3_value **); /* ** CAPI3REF: Low-level system error code ** ** ^Attempt to return the underlying operating system error code or error ** number that caused the most recent I/O error or failure to open a file. ** The return value is OS-dependent. For example, on unix systems, after ** [sqlite3_open_v2()] returns [SQLITE_CANTOPEN], this interface could be ** called to get back the underlying "errno" that caused the problem, such ** as ENOSPC, EAUTH, EISDIR, and so forth. */ SQLITE_API int sqlite3_system_errno(sqlite3*); /* ** CAPI3REF: Database Snapshot ** KEYWORDS: {snapshot} ** EXPERIMENTAL ** ** An instance of the snapshot object records the state of a [WAL mode] |
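A hedged sketch of the pre-update hook whose declaration appears above; it assumes a build with SQLITE_ENABLE_PREUPDATE_HOOK, and the tracing behaviour is purely illustrative.

#include <sqlite3.h>
#include <stdio.h>

/* Sketch: trace every pending insert/update/delete on rowid tables. */
static void preupdate_cb(
  void *pCtx, sqlite3 *db, int op,
  const char *zDb, const char *zTab,
  sqlite3_int64 iKey1, sqlite3_int64 iKey2
){
  (void)pCtx; (void)iKey2;
  printf("%s on %s.%s rowid=%lld (depth %d, %d columns)\n",
         op==SQLITE_INSERT ? "INSERT" : op==SQLITE_DELETE ? "DELETE" : "UPDATE",
         zDb, zTab, (long long)iKey1,
         sqlite3_preupdate_depth(db), sqlite3_preupdate_count(db));
}

static void install_preupdate_hook(sqlite3 *db){
  sqlite3_preupdate_hook(db, preupdate_cb, 0);
}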
︙ | ︙ | |||
8105 8106 8107 8108 8109 8110 8111 | ** The [sqlite3_snapshot] object returned from a successful call to ** [sqlite3_snapshot_get()] must be freed using [sqlite3_snapshot_free()] ** to avoid a memory leak. ** ** The [sqlite3_snapshot_get()] interface is only available when the ** SQLITE_ENABLE_SNAPSHOT compile-time option is used. */ | | | 8299 8300 8301 8302 8303 8304 8305 8306 8307 8308 8309 8310 8311 8312 8313 | ** The [sqlite3_snapshot] object returned from a successful call to ** [sqlite3_snapshot_get()] must be freed using [sqlite3_snapshot_free()] ** to avoid a memory leak. ** ** The [sqlite3_snapshot_get()] interface is only available when the ** SQLITE_ENABLE_SNAPSHOT compile-time option is used. */ SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_get( sqlite3 *db, const char *zSchema, sqlite3_snapshot **ppSnapshot ); /* ** CAPI3REF: Start a read transaction on an historical snapshot |
︙ | ︙ | |||
8143 8144 8145 8146 8147 8148 8149 | ** after the most recent I/O on the database connection.)^ ** (Hint: Run "[PRAGMA application_id]" against a newly opened ** database connection in order to make it ready to use snapshots.) ** ** The [sqlite3_snapshot_open()] interface is only available when the ** SQLITE_ENABLE_SNAPSHOT compile-time option is used. */ | | | | 8337 8338 8339 8340 8341 8342 8343 8344 8345 8346 8347 8348 8349 8350 8351 8352 8353 8354 8355 8356 8357 8358 8359 8360 8361 8362 8363 8364 8365 8366 8367 8368 | ** after the most recent I/O on the database connection.)^ ** (Hint: Run "[PRAGMA application_id]" against a newly opened ** database connection in order to make it ready to use snapshots.) ** ** The [sqlite3_snapshot_open()] interface is only available when the ** SQLITE_ENABLE_SNAPSHOT compile-time option is used. */ SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_open( sqlite3 *db, const char *zSchema, sqlite3_snapshot *pSnapshot ); /* ** CAPI3REF: Destroy a snapshot ** EXPERIMENTAL ** ** ^The [sqlite3_snapshot_free(P)] interface destroys [sqlite3_snapshot] P. ** The application must eventually free every [sqlite3_snapshot] object ** using this routine to avoid a memory leak. ** ** The [sqlite3_snapshot_free()] interface is only available when the ** SQLITE_ENABLE_SNAPSHOT compile-time option is used. */ SQLITE_API SQLITE_EXPERIMENTAL void sqlite3_snapshot_free(sqlite3_snapshot*); /* ** CAPI3REF: Compare the ages of two snapshot handles. ** EXPERIMENTAL ** ** The sqlite3_snapshot_cmp(P1, P2) interface is used to compare the ages ** of two valid snapshot handles. |
︙ | ︙ | |||
8184 8185 8186 8187 8188 8189 8190 | ** wal file was last deleted, the value returned by this function ** is undefined. ** ** Otherwise, this API returns a negative value if P1 refers to an older ** snapshot than P2, zero if the two handles refer to the same database ** snapshot, and a positive value if P1 is a newer snapshot than P2. */ | | | | 8378 8379 8380 8381 8382 8383 8384 8385 8386 8387 8388 8389 8390 8391 8392 8393 8394 8395 8396 8397 8398 8399 8400 8401 8402 8403 8404 8405 8406 8407 8408 | ** wal file was last deleted, the value returned by this function ** is undefined. ** ** Otherwise, this API returns a negative value if P1 refers to an older ** snapshot than P2, zero if the two handles refer to the same database ** snapshot, and a positive value if P1 is a newer snapshot than P2. */ SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_cmp( sqlite3_snapshot *p1, sqlite3_snapshot *p2 ); /* ** Undo the hack that converts floating point types to integer for ** builds on processors without floating point support. */ #ifdef SQLITE_OMIT_FLOATING_POINT # undef double #endif #ifdef __cplusplus } /* End of the 'extern "C"' block */ #endif #endif /* SQLITE3_H */ /******** Begin file sqlite3rtree.h *********/ /* ** 2010 August 30 ** ** The author disclaims copyright to this source code. In place of ** a legal notice, here is a blessing: |
︙ | ︙ | |||
8242 8243 8244 8245 8246 8247 8248 | /* ** Register a geometry callback named zGeom that can be used as part of an ** R-Tree geometry query as follows: ** ** SELECT ... FROM <rtree> WHERE <rtree col> MATCH $zGeom(... params ...) */ | | | 8436 8437 8438 8439 8440 8441 8442 8443 8444 8445 8446 8447 8448 8449 8450 | /* ** Register a geometry callback named zGeom that can be used as part of an ** R-Tree geometry query as follows: ** ** SELECT ... FROM <rtree> WHERE <rtree col> MATCH $zGeom(... params ...) */ SQLITE_API int sqlite3_rtree_geometry_callback( sqlite3 *db, const char *zGeom, int (*xGeom)(sqlite3_rtree_geometry*, int, sqlite3_rtree_dbl*,int*), void *pContext ); |
︙ | ︙ | |||
8268 8269 8270 8271 8272 8273 8274 | /* ** Register a 2nd-generation geometry callback named zScore that can be ** used as part of an R-Tree geometry query as follows: ** ** SELECT ... FROM <rtree> WHERE <rtree col> MATCH $zQueryFunc(... params ...) */ | | | 8462 8463 8464 8465 8466 8467 8468 8469 8470 8471 8472 8473 8474 8475 8476 | /* ** Register a 2nd-generation geometry callback named zScore that can be ** used as part of an R-Tree geometry query as follows: ** ** SELECT ... FROM <rtree> WHERE <rtree col> MATCH $zQueryFunc(... params ...) */ SQLITE_API int sqlite3_rtree_query_callback( sqlite3 *db, const char *zQueryFunc, int (*xQueryFunc)(sqlite3_rtree_query_info*), void *pContext, void (*xDestructor)(void*) ); |
︙ | ︙ | |||
8480 8481 8482 8483 8484 8485 8486 | const char *zTab /* Table name */ ); /* ** CAPI3REF: Set a table filter on a Session Object. ** ** The second argument (xFilter) is the "filter callback". For changes to rows | | | 8674 8675 8676 8677 8678 8679 8680 8681 8682 8683 8684 8685 8686 8687 8688 | const char *zTab /* Table name */ ); /* ** CAPI3REF: Set a table filter on a Session Object. ** ** The second argument (xFilter) is the "filter callback". For changes to rows ** in tables that are not attached to the Session object, the filter is called ** to determine whether changes to the table's rows should be tracked or not. ** If xFilter returns 0, changes is not tracked. Note that once a table is ** attached, xFilter will not be called again. */ void sqlite3session_table_filter( sqlite3_session *pSession, /* Session object */ int(*xFilter)( |
︙ | ︙ | |||
8746 8747 8748 8749 8750 8751 8752 | ** destroyed. ** ** Assuming the changeset blob was created by one of the ** [sqlite3session_changeset()], [sqlite3changeset_concat()] or ** [sqlite3changeset_invert()] functions, all changes within the changeset ** that apply to a single table are grouped together. This means that when ** an application iterates through a changeset using an iterator created by | | | 8940 8941 8942 8943 8944 8945 8946 8947 8948 8949 8950 8951 8952 8953 8954 | ** destroyed. ** ** Assuming the changeset blob was created by one of the ** [sqlite3session_changeset()], [sqlite3changeset_concat()] or ** [sqlite3changeset_invert()] functions, all changes within the changeset ** that apply to a single table are grouped together. This means that when ** an application iterates through a changeset using an iterator created by ** this function, all changes that relate to a single table are visited ** consecutively. There is no chance that the iterator will visit a change ** the applies to table X, then one for table Y, and then later on visit ** another change for table X. */ int sqlite3changeset_start( sqlite3_changeset_iter **pp, /* OUT: New changeset iterator handle */ int nChangeset, /* Size of changeset blob in bytes */ |
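An iterator created by sqlite3changeset_start() is typically driven as in the following sketch (illustrative only; the changeset buffer is assumed to come from sqlite3session_changeset() and error handling is abbreviated):

#include <stdio.h>
#include "sqlite3.h"

/* Walk a changeset and print one line per change. */
int print_changeset_ops(int nChangeset, void *pChangeset){
  sqlite3_changeset_iter *pIter = 0;
  int rc = sqlite3changeset_start(&pIter, nChangeset, pChangeset);
  if( rc!=SQLITE_OK ) return rc;
  while( sqlite3changeset_next(pIter)==SQLITE_ROW ){
    const char *zTab = 0;
    int nCol = 0, op = 0, bIndirect = 0;
    sqlite3changeset_op(pIter, &zTab, &nCol, &op, &bIndirect);
    printf("%s on %s (%d columns)\n",
           op==SQLITE_INSERT ? "INSERT" :
           op==SQLITE_DELETE ? "DELETE" : "UPDATE", zTab, nCol);
  }
  return sqlite3changeset_finalize(pIter);
}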
︙ | ︙ | |||
8833 8834 8835 8836 8837 8838 8839 | ** This function is used to find which columns comprise the PRIMARY KEY of ** the table modified by the change that iterator pIter currently points to. ** If successful, *pabPK is set to point to an array of nCol entries, where ** nCol is the number of columns in the table. Elements of *pabPK are set to ** 0x01 if the corresponding column is part of the tables primary key, or ** 0x00 if it is not. ** | | | 9027 9028 9029 9030 9031 9032 9033 9034 9035 9036 9037 9038 9039 9040 9041 | ** This function is used to find which columns comprise the PRIMARY KEY of ** the table modified by the change that iterator pIter currently points to. ** If successful, *pabPK is set to point to an array of nCol entries, where ** nCol is the number of columns in the table. Elements of *pabPK are set to ** 0x01 if the corresponding column is part of the tables primary key, or ** 0x00 if it is not. ** ** If argument pnCol is not NULL, then *pnCol is set to the number of columns ** in the table. ** ** If this function is called when the iterator does not point to a valid ** entry, SQLITE_MISUSE is returned and the output variables zeroed. Otherwise, ** SQLITE_OK is returned and the output variables populated as described ** above. */ |
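A small sketch of the PRIMARY-KEY query described above (illustrative only; it assumes pIter points at a valid entry, i.e. it is called from inside a sqlite3changeset_next() loop such as the one sketched earlier):

#include <stdio.h>
#include "sqlite3.h"

/* Report which columns of the current entry's table form its PRIMARY KEY. */
static void print_pk_columns(sqlite3_changeset_iter *pIter){
  unsigned char *abPK = 0;
  int nCol = 0, i;
  if( sqlite3changeset_pk(pIter, &abPK, &nCol)==SQLITE_OK ){
    for(i=0; i<nCol; i++){
      if( abPK[i] ) printf("column %d is part of the PRIMARY KEY\n", i);
    }
  }
}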
︙ | ︙ | |||
9050 9051 9052 9053 9054 9055 9056 | void *pB, /* Pointer to buffer containing changeset B */ int *pnOut, /* OUT: Number of bytes in output changeset */ void **ppOut /* OUT: Buffer containing output changeset */ ); /* | | | | 9244 9245 9246 9247 9248 9249 9250 9251 9252 9253 9254 9255 9256 9257 9258 9259 9260 9261 9262 9263 | void *pB, /* Pointer to buffer containing changeset B */ int *pnOut, /* OUT: Number of bytes in output changeset */ void **ppOut /* OUT: Buffer containing output changeset */ ); /* ** CAPI3REF: Changegroup Handle */ typedef struct sqlite3_changegroup sqlite3_changegroup; /* ** CAPI3REF: Create A New Changegroup Object ** ** An sqlite3_changegroup object is used to combine two or more changesets ** (or patchsets) into a single changeset (or patchset). A single changegroup ** object may combine changesets or patchsets, but not both. The output is ** always in the same format as the input. ** ** If successful, this function returns SQLITE_OK and populates (*pp) with |
︙ | ︙ | |||
9092 9093 9094 9095 9096 9097 9098 9099 9100 9101 9102 9103 9104 9105 9106 9107 9108 9109 9110 9111 9112 | ** As well as the regular sqlite3changegroup_add() and ** sqlite3changegroup_output() functions, also available are the streaming ** versions sqlite3changegroup_add_strm() and sqlite3changegroup_output_strm(). */ int sqlite3changegroup_new(sqlite3_changegroup **pp); /* ** Add all changes within the changeset (or patchset) in buffer pData (size ** nData bytes) to the changegroup. ** ** If the buffer contains a patchset, then all prior calls to this function ** on the same changegroup object must also have specified patchsets. Or, if ** the buffer contains a changeset, so must have the earlier calls to this ** function. Otherwise, SQLITE_ERROR is returned and no changes are added ** to the changegroup. ** ** Rows within the changeset and changegroup are identified by the values in ** their PRIMARY KEY columns. A change in the changeset is considered to ** apply to the same row as a change already present in the changegroup if ** the two rows have the same primary key. ** | > > | | 9286 9287 9288 9289 9290 9291 9292 9293 9294 9295 9296 9297 9298 9299 9300 9301 9302 9303 9304 9305 9306 9307 9308 9309 9310 9311 9312 9313 9314 9315 9316 | ** As well as the regular sqlite3changegroup_add() and ** sqlite3changegroup_output() functions, also available are the streaming ** versions sqlite3changegroup_add_strm() and sqlite3changegroup_output_strm(). */ int sqlite3changegroup_new(sqlite3_changegroup **pp); /* ** CAPI3REF: Add A Changeset To A Changegroup ** ** Add all changes within the changeset (or patchset) in buffer pData (size ** nData bytes) to the changegroup. ** ** If the buffer contains a patchset, then all prior calls to this function ** on the same changegroup object must also have specified patchsets. Or, if ** the buffer contains a changeset, so must have the earlier calls to this ** function. Otherwise, SQLITE_ERROR is returned and no changes are added ** to the changegroup. ** ** Rows within the changeset and changegroup are identified by the values in ** their PRIMARY KEY columns. A change in the changeset is considered to ** apply to the same row as a change already present in the changegroup if ** the two rows have the same primary key. ** ** Changes to rows that do not already appear in the changegroup are ** simply copied into it. Or, if both the new changeset and the changegroup ** contain changes that apply to a single row, the final contents of the ** changegroup depends on the type of each change, as follows: ** ** <table border=1 style="margin-left:8ex;margin-right:8ex"> ** <tr><th style="white-space:pre">Existing Change </th> ** <th style="white-space:pre">New Change </th> |
︙ | ︙ | |||
9167 9168 9169 9170 9171 9172 9173 9174 9175 9176 9177 9178 9179 9180 | ** final contents of the changegroup is undefined. ** ** If no error occurs, SQLITE_OK is returned. */ int sqlite3changegroup_add(sqlite3_changegroup*, int nData, void *pData); /* ** Obtain a buffer containing a changeset (or patchset) representing the ** current contents of the changegroup. If the inputs to the changegroup ** were themselves changesets, the output is a changeset. Or, if the ** inputs were patchsets, the output is also a patchset. ** ** As with the output of the sqlite3session_changeset() and ** sqlite3session_patchset() functions, all changes related to a single | > > | 9363 9364 9365 9366 9367 9368 9369 9370 9371 9372 9373 9374 9375 9376 9377 9378 | ** final contents of the changegroup is undefined. ** ** If no error occurs, SQLITE_OK is returned. */ int sqlite3changegroup_add(sqlite3_changegroup*, int nData, void *pData); /* ** CAPI3REF: Obtain A Composite Changeset From A Changegroup ** ** Obtain a buffer containing a changeset (or patchset) representing the ** current contents of the changegroup. If the inputs to the changegroup ** were themselves changesets, the output is a changeset. Or, if the ** inputs were patchsets, the output is also a patchset. ** ** As with the output of the sqlite3session_changeset() and ** sqlite3session_patchset() functions, all changes related to a single |
︙ | ︙ | |||
9195 9196 9197 9198 9199 9200 9201 | int sqlite3changegroup_output( sqlite3_changegroup*, int *pnData, /* OUT: Size of output buffer in bytes */ void **ppData /* OUT: Pointer to output buffer */ ); /* | | | 9393 9394 9395 9396 9397 9398 9399 9400 9401 9402 9403 9404 9405 9406 9407 | int sqlite3changegroup_output( sqlite3_changegroup*, int *pnData, /* OUT: Size of output buffer in bytes */ void **ppData /* OUT: Pointer to output buffer */ ); /* ** CAPI3REF: Delete A Changegroup Object */ void sqlite3changegroup_delete(sqlite3_changegroup*); /* ** CAPI3REF: Apply A Changeset To A Database ** ** Apply a changeset to a database. This function attempts to update the |
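Taken together, the changegroup interfaces added above follow a simple new/add/output/delete life-cycle. The sketch below is illustrative only: the input buffers are assumed to have been produced by sqlite3session_changeset() (or an equivalent), and the caller frees the output with sqlite3_free().

#include "sqlite3.h"

/* Combine two changesets (or two patchsets) into one. */
int combine_changesets(
  int nA, void *pA,          /* First input changeset */
  int nB, void *pB,          /* Second input changeset */
  int *pnOut, void **ppOut   /* OUT: combined changeset (sqlite3_free()) */
){
  sqlite3_changegroup *pGrp = 0;
  int rc = sqlite3changegroup_new(&pGrp);
  if( rc==SQLITE_OK ) rc = sqlite3changegroup_add(pGrp, nA, pA);
  if( rc==SQLITE_OK ) rc = sqlite3changegroup_add(pGrp, nB, pB);
  if( rc==SQLITE_OK ) rc = sqlite3changegroup_output(pGrp, pnOut, ppOut);
  if( pGrp ) sqlite3changegroup_delete(pGrp);
  return rc;
}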
︙ | ︙ | |||
9920 9921 9922 9923 9924 9925 9926 | ** Applications may also register custom tokenizer types. A tokenizer ** is registered by providing fts5 with a populated instance of the ** following structure. All structure methods must be defined, setting ** any member of the fts5_tokenizer struct to NULL leads to undefined ** behaviour. The structure methods are expected to function as follows: ** ** xCreate: | | | 10118 10119 10120 10121 10122 10123 10124 10125 10126 10127 10128 10129 10130 10131 10132 | ** Applications may also register custom tokenizer types. A tokenizer ** is registered by providing fts5 with a populated instance of the ** following structure. All structure methods must be defined, setting ** any member of the fts5_tokenizer struct to NULL leads to undefined ** behaviour. The structure methods are expected to function as follows: ** ** xCreate: ** This function is used to allocate and initialize a tokenizer instance. ** A tokenizer instance is required to actually tokenize text. ** ** The first argument passed to this function is a copy of the (void*) ** pointer provided by the application when the fts5_tokenizer object ** was registered with FTS5 (the third argument to xCreateTokenizer()). ** The second and third arguments are an array of nul-terminated strings ** containing the tokenizer arguments, if any, specified following the |
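A minimal sketch of the xCreate method described above. It is illustrative only: MyTokenizer, its single "case_sensitive" option, and the function name are invented, and the xDelete/xTokenize methods that a complete fts5_tokenizer must also supply are omitted. Fts5Tokenizer comes from the fts5 section of the amalgamation header.

#include <string.h>
#include "sqlite3.h"

typedef struct MyTokenizer MyTokenizer;
struct MyTokenizer {
  int bCaseSensitive;      /* Set when the "case_sensitive" argument is given */
};

/* xCreate: allocate and initialize one tokenizer instance. */
static int myTokCreate(
  void *pCtx,                   /* Copy of 3rd arg to xCreateTokenizer() */
  const char **azArg, int nArg, /* Arguments from the CREATE VIRTUAL TABLE */
  Fts5Tokenizer **ppOut         /* OUT: new tokenizer instance */
){
  MyTokenizer *p = sqlite3_malloc(sizeof(*p));
  (void)pCtx;
  if( p==0 ) return SQLITE_NOMEM;
  memset(p, 0, sizeof(*p));
  if( nArg>0 && strcmp(azArg[0], "case_sensitive")==0 ) p->bCaseSensitive = 1;
  *ppOut = (Fts5Tokenizer*)p;
  return SQLITE_OK;
}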
︙ | ︙ | |||
10179 10180 10181 10182 10183 10184 10185 10186 | *************************************************************************/ #ifdef __cplusplus } /* end of the 'extern "C"' block */ #endif #endif /* _FTS5_H */ | < | 10377 10378 10379 10380 10381 10382 10383 10384 10385 | *************************************************************************/ #ifdef __cplusplus } /* end of the 'extern "C"' block */ #endif #endif /* _FTS5_H */ /******** End of fts5.h *********/ |
Changes to src/stash.c.
︙ | ︙ | |||
21 22 23 24 25 26 27 | #include <assert.h> /* ** SQL code to implement the tables needed by the stash. */ static const char zStashInit[] = | | | | | | 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 | #include <assert.h> /* ** SQL code to implement the tables needed by the stash. */ static const char zStashInit[] = @ CREATE TABLE IF NOT EXISTS localdb.stash( @ stashid INTEGER PRIMARY KEY, -- Unique stash identifier @ vid INTEGER, -- The baseline checkout for this stash @ comment TEXT, -- Comment for this stash. Or NULL @ ctime TIMESTAMP -- When the stash was created @ ); @ CREATE TABLE IF NOT EXISTS localdb.stashfile( @ stashid INTEGER REFERENCES stash, -- Stash that contains this file @ rid INTEGER, -- Baseline content in BLOB table or 0. @ isAdded BOOLEAN, -- True if this is an added file @ isRemoved BOOLEAN, -- True if this file is deleted @ isExec BOOLEAN, -- True if file is executable @ isLink BOOLEAN, -- True if file is a symlink @ origname TEXT, -- Original filename @ newname TEXT, -- New name for file at next check-in @ delta BLOB, -- Delta from baseline. Content if rid=0 @ PRIMARY KEY(newname, stashid) @ ); @ INSERT OR IGNORE INTO vvar(name, value) VALUES('stash-next', 1); ; /* ** Add zFName to the stash given by stashid. zFName might be the name of a ** file or a directory. If a directory, add all changed files contained |
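For orientation, a hypothetical helper (not in this check-in) that reads the stash table created by zStashInit, using the same db_prepare()/db_step() idioms seen elsewhere in this file; the real "stash ls" implementation inside stash_cmd() is more elaborate.

/* List every stash currently stored in the localdb.stash table.
** Illustrative only. */
static void stash_print_summary(void){
  Stmt q;
  db_prepare(&q,
    "SELECT stashid, coalesce(comment,'(no comment)')"
    "  FROM stash ORDER BY stashid"
  );
  while( db_step(&q)==SQLITE_ROW ){
    fossil_print("%5d: %s\n",
                 db_column_int(&q, 0),
                 db_column_text(&q, 1));
  }
  db_finalize(&q);
}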
︙ | ︙ | |||
194 195 196 197 198 199 200 | }else{ stash_add_file_or_dir(stashid, vid, g.zLocalRoot); } return stashid; } /* | | | | | 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 | }else{ stash_add_file_or_dir(stashid, vid, g.zLocalRoot); } return stashid; } /* ** Apply a stash to the current checkout. */ static void stash_apply(int stashid, int nConflict){ int vid; Stmt q; db_prepare(&q, "SELECT rid, isRemoved, isExec, isLink, origname, newname, delta" " FROM stashfile WHERE stashid=%d", stashid ); vid = db_lget_int("checkout",0); db_multi_exec("CREATE TEMP TABLE sfile(pathname TEXT PRIMARY KEY %s)", filename_collation()); while( db_step(&q)==SQLITE_ROW ){ int rid = db_column_int(&q, 0); int isRemoved = db_column_int(&q, 1); int isExec = db_column_int(&q, 2); int isLink = db_column_int(&q, 3); const char *zOrig = db_column_text(&q, 4); const char *zNew = db_column_text(&q, 5); char *zOPath = mprintf("%s%s", g.zLocalRoot, zOrig); char *zNPath = mprintf("%s%s", g.zLocalRoot, zNew); Blob delta; undo_save(zNew); blob_zero(&delta); if( rid==0 ){ db_multi_exec("INSERT OR IGNORE INTO sfile(pathname) VALUES(%Q)", zNew); db_ephemeral_blob(&q, 6, &delta); blob_write_to_file(&delta, zNPath); file_wd_setexe(zNPath, isExec); }else if( isRemoved ){ fossil_print("DELETE %s\n", zOrig); file_delete(zOPath); }else{ |
︙ | ︙ | |||
276 277 278 279 280 281 282 283 284 285 286 287 288 289 | blob_reset(&b); blob_reset(&disk); } blob_reset(&delta); if( fossil_strcmp(zOrig,zNew)!=0 ){ undo_save(zOrig); file_delete(zOPath); } } stash_add_files_in_sfile(vid); db_finalize(&q); if( nConflict ){ fossil_print( "WARNING: %d merge conflicts - see messages above for details.\n", | > > > > > | 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 | blob_reset(&b); blob_reset(&disk); } blob_reset(&delta); if( fossil_strcmp(zOrig,zNew)!=0 ){ undo_save(zOrig); file_delete(zOPath); db_multi_exec( "UPDATE vfile SET pathname='%q', origname='%q'" " WHERE pathname='%q' %s AND vid=%d", zNew, zOrig, zOrig, filename_collation(), vid ); } } stash_add_files_in_sfile(vid); db_finalize(&q); if( nConflict ){ fossil_print( "WARNING: %d merge conflicts - see messages above for details.\n", |
︙ | ︙ | |||
391 392 393 394 395 396 397 | /* ** If zStashId is non-NULL then interpret is as a stash number and ** return that number. Or throw a fatal error if it is not a valid ** stash number. If it is NULL, return the most recent stash or ** throw an error if the stash is empty. */ static int stash_get_id(const char *zStashId){ | | | | | < | | | | < | | | | | | | < > > | > > > > > > > | > > > > > | 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 | /* ** If zStashId is non-NULL then interpret is as a stash number and ** return that number. Or throw a fatal error if it is not a valid ** stash number. If it is NULL, return the most recent stash or ** throw an error if the stash is empty. */ static int stash_get_id(const char *zStashId){ int stashid; if( zStashId==0 ){ stashid = db_int(0, "SELECT max(stashid) FROM stash"); if( stashid==0 ) fossil_fatal("empty stash"); }else{ stashid = atoi(zStashId); if( !db_exists("SELECT 1 FROM stash WHERE stashid=%d", stashid) ){ fossil_fatal("no such stash: %s", zStashId); } } return stashid; } /* ** COMMAND: stash ** ** Usage: %fossil stash SUBCOMMAND ARGS... ** ** fossil stash ** fossil stash save ?-m|--comment COMMENT? ?FILES...? ** fossil stash snapshot ?-m|--comment COMMENT? ?FILES...? ** ** Save the current changes in the working tree as a new stash. ** Then revert the changes back to the last check-in. If FILES ** are listed, then only stash and revert the named files. The ** "save" verb can be omitted if and only if there are no other ** arguments. The "snapshot" verb works the same as "save" but ** omits the revert, keeping the checkout unchanged. ** ** fossil stash list|ls ?-v|--verbose? ?-W|--width <num>? ** ** List all changes sets currently stashed. Show information about ** individual files in each changeset if -v or --verbose is used. ** ** fossil stash show|cat ?STASHID? ?DIFF-OPTIONS? ** ** Show the contents of a stash. ** ** fossil stash pop ** fossil stash apply ?STASHID? ** ** Apply STASHID or the most recently create stash to the current ** working checkout. The "pop" command deletes that changeset from ** the stash after applying it but the "apply" command retains the ** changeset. ** ** fossil stash goto ?STASHID? ** ** Update to the baseline checkout for STASHID then apply the ** changes of STASHID. Keep STASHID so that it can be reused ** This command is undoable. ** ** fossil stash drop|rm ?STASHID? ?-a|--all? ** ** Forget everything about STASHID. Forget the whole stash if the ** -a|--all flag is used. Individual drops are undoable but -a|--all ** is not. ** ** fossil stash diff ?STASHID? ?DIFF-OPTIONS? ** fossil stash gdiff ?STASHID? ?DIFF-OPTIONS? ** ** Show diffs of the current working directory and what that ** directory would be if STASHID were applied. ** ** SUMMARY: ** fossil stash ** fossil stash save ?-m|--comment COMMENT? ?FILES...? ** fossil stash snapshot ?-m|--comment COMMENT? ?FILES...? ** fossil stash list|ls ?-v|--verbose? ?-W|--width <num>? ** fossil stash show|cat ?STASHID? ?DIFF-OPTIONS? ** fossil stash pop ** fossil stash apply|goto ?STASHID? ** fossil stash drop|rm ?STASHID? ?-a|--all? ** fossil stash diff ?STASHID? 
?DIFF-OPTIONS? ** fossil stash gdiff ?STASHID? ?DIFF-OPTIONS? */ void stash_cmd(void){ const char *zCmd; int nCmd; int stashid = 0; int rc; undo_capture_command_line(); db_must_be_within_tree(); db_open_config(0, 0); db_begin_transaction(); db_multi_exec(zStashInit /*works-like:""*/); rc = db_exists("SELECT 1 FROM sqlite_master" " WHERE name='stashfile'" " AND sql GLOB '* PRIMARY KEY(origname, stashid)*'"); if( rc!=0 ){ db_multi_exec( "CREATE TABLE localdb.stashfile_tmp AS SELECT * FROM stashfile;" "DROP TABLE stashfile;" ); db_multi_exec(zStashInit /*works-like:""*/); db_multi_exec( "INSERT INTO stashfile SELECT * FROM stashfile_tmp;" "DROP TABLE stashfile_tmp;" ); } if( g.argc<=2 ){ zCmd = "save"; }else{ zCmd = g.argv[2]; } nCmd = strlen(zCmd); if( memcmp(zCmd, "save", nCmd)==0 ){ |
︙ | ︙ | |||
641 642 643 644 645 646 647 648 649 650 651 | || memcmp(zCmd, "gdiff", nCmd)==0 || memcmp(zCmd, "show", nCmd)==0 || memcmp(zCmd, "cat", nCmd)==0 ){ const char *zDiffCmd = 0; const char *zBinGlob = 0; int fIncludeBinary = 0; u64 diffFlags; if( find_option("tk",0,0)!=0 ){ db_close(0); | > < < < < < | < < < | | | 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 | || memcmp(zCmd, "gdiff", nCmd)==0 || memcmp(zCmd, "show", nCmd)==0 || memcmp(zCmd, "cat", nCmd)==0 ){ const char *zDiffCmd = 0; const char *zBinGlob = 0; int fIncludeBinary = 0; int fBaseline = zCmd[0]=='s' || zCmd[0]=='c'; u64 diffFlags; if( find_option("tk",0,0)!=0 ){ db_close(0); diff_tk(fBaseline ? "stash show" : "stash diff", 3); return; } if( find_option("internal","i",0)==0 ){ zDiffCmd = diff_command_external(memcmp(zCmd, "gdiff", nCmd)==0); } diffFlags = diff_options(); if( find_option("verbose","v",0)!=0 ) diffFlags |= DIFF_VERBOSE; if( g.argc>4 ) usage(mprintf("%s ?STASHID? ?DIFF-OPTIONS?", zCmd)); if( zDiffCmd ){ zBinGlob = diff_get_binary_glob(); fIncludeBinary = diff_include_binary_files(); } stashid = stash_get_id(g.argc==4 ? g.argv[3] : 0); stash_diff(stashid, zDiffCmd, zBinGlob, fBaseline, fIncludeBinary, diffFlags); }else if( memcmp(zCmd, "help", nCmd)==0 ){ g.argv[1] = "help"; g.argv[2] = "stash"; g.argc = 3; help_cmd(); }else { usage("SUBCOMMAND ARGS..."); } db_end_transaction(0); } |
Changes to src/stat.c.
︙ | ︙ | |||
59 60 61 62 63 64 65 | ** ** Show statistics and global information about the repository. */ void stat_page(void){ i64 t, fsize; int n, m; int szMax, szAvg; | < | | | | | | > > > | 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 | ** ** Show statistics and global information about the repository. */ void stat_page(void){ i64 t, fsize; int n, m; int szMax, szAvg; int brief; char zBuf[100]; const char *p; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } brief = P("brief")!=0; style_header("Repository Statistics"); style_adunit_config(ADUNIT_RIGHT_OK); if( g.perm.Admin ){ style_submenu_element("URLs", "urllist"); style_submenu_element("Schema", "repo_schema"); style_submenu_element("Web-Cache", "cachestat"); } style_submenu_element("Activity Reports", "reports"); style_submenu_element("SHA1 Collisions", "hash-collisions"); if( sqlite3_compileoption_used("ENABLE_DBSTAT_VTAB") ){ style_submenu_element("Table Sizes", "repo-tabsize"); } if( g.perm.Admin || g.perm.Setup || db_get_boolean("test_env_enable",0) ){ style_submenu_element("Environment", "test_env"); } @ <table class="label-value"> @ <tr><th>Repository Size:</th><td> fsize = file_size(g.zRepositoryName); bigSizeName(sizeof(zBuf), zBuf, fsize); @ %s(zBuf) @ </td></tr> |
︙ | ︙ | |||
115 116 117 118 119 120 121 122 123 124 125 126 127 128 | fsize /= 10; }else{ b = 1; } a = t/fsize; @ %d(a):%d(b) @ </td></tr> } @ <tr><th>Number Of Check-ins:</th><td> n = db_int(0, "SELECT count(*) FROM event WHERE type='ci' /*scan*/"); @ %d(n) @ </td></tr> @ <tr><th>Number Of Files:</th><td> n = db_int(0, "SELECT count(*) FROM filename /*scan*/"); | > > > > > > > > > > > > > > > > > > > > > | 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 | fsize /= 10; }else{ b = 1; } a = t/fsize; @ %d(a):%d(b) @ </td></tr> } if( db_table_exists("repository","unversioned") ){ Stmt q; char zStored[100]; db_prepare(&q, "SELECT count(*), sum(sz), sum(length(content))" " FROM unversioned" " WHERE length(hash)>1" ); if( db_step(&q)==SQLITE_ROW && (n = db_column_int(&q,0))>0 ){ sqlite3_int64 iSz, iStored; iSz = db_column_int64(&q,1); iStored = db_column_int64(&q,2); approxSizeName(sizeof(zBuf), zBuf, iSz); approxSizeName(sizeof(zStored), zStored, iStored); @ <tr><th>Unversioned Files:</th><td> @ %z(href("%R/uvlist"))%d(n) files</a>, @ total size %s(zBuf) uncompressed, %s(zStored) compressed @ </td></tr> } db_finalize(&q); } @ <tr><th>Number Of Check-ins:</th><td> n = db_int(0, "SELECT count(*) FROM event WHERE type='ci' /*scan*/"); @ %d(n) @ </td></tr> @ <tr><th>Number Of Files:</th><td> n = db_int(0, "SELECT count(*) FROM filename /*scan*/"); |
︙ | ︙ | |||
142 143 144 145 146 147 148 | @ <tr><th>Duration Of Project:</th><td> n = db_int(0, "SELECT julianday('now') - (SELECT min(mtime) FROM event)" " + 0.99"); @ %d(n) days or approximately %.2f(n/365.2425) years. @ </td></tr> p = db_get("project-code", 0); if( p ){ | | > > > > > > | < | | | | | | 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 | @ <tr><th>Duration Of Project:</th><td> n = db_int(0, "SELECT julianday('now') - (SELECT min(mtime) FROM event)" " + 0.99"); @ %d(n) days or approximately %.2f(n/365.2425) years. @ </td></tr> p = db_get("project-code", 0); if( p ){ @ <tr><th>Project ID:</th> @ <td>%h(p) %h(db_get("project-name",""))</td></tr> } p = db_get("parent-project-code", 0); if( p ){ @ <tr><th>Parent Project ID:</th> @ <td>%h(p) %h(db_get("parent-project-name",""))</td></tr> } /* @ <tr><th>Server ID:</th><td>%h(db_get("server-code",""))</td></tr> */ @ <tr><th>Fossil Version:</th><td> @ %h(MANIFEST_DATE) %h(MANIFEST_VERSION) @ (%h(RELEASE_VERSION)) [compiled using %h(COMPILER_NAME)] @ </td></tr> @ <tr><th>SQLite Version:</th><td>%.19s(sqlite3_sourceid()) @ [%.10s(&sqlite3_sourceid()[20])] (%s(sqlite3_libversion()))</td></tr> @ <tr><th>Schema Version:</th><td>%h(g.zAuxSchema)</td></tr> @ <tr><th>Repository Rebuilt:</th><td> @ %h(db_get_mtime("rebuilt","%Y-%m-%d %H:%M:%S","Never")) @ By Fossil %h(db_get("rebuilt","Unknown"))</td></tr> @ <tr><th>Database Stats:</th><td> @ %d(db_int(0, "PRAGMA repository.page_count")) pages, @ %d(db_int(0, "PRAGMA repository.page_size")) bytes/page, @ %d(db_int(0, "PRAGMA repository.freelist_count")) free pages, @ %s(db_text(0, "PRAGMA repository.encoding")), @ %s(db_text(0, "PRAGMA repository.journal_mode")) mode @ </td></tr> @ </table> style_footer(); } /* |
︙ | ︙ | |||
185 186 187 188 189 190 191 | ** --db-check Run a PRAGMA quick_check on the repository database ** --omit-version-info Omit the SQLite and Fossil version information */ void dbstat_cmd(void){ i64 t, fsize; int n, m; int szMax, szAvg; | < | 213 214 215 216 217 218 219 220 221 222 223 224 225 226 | ** --db-check Run a PRAGMA quick_check on the repository database ** --omit-version-info Omit the SQLite and Fossil version information */ void dbstat_cmd(void){ i64 t, fsize; int n, m; int szMax, szAvg; int brief; int omitVers; /* Omit Fossil and SQLite version information */ int dbCheck; /* True for the --db-check option */ char zBuf[100]; const int colWidth = -19 /* printf alignment/width for left column */; const char *p, *z; |
︙ | ︙ | |||
284 285 286 287 288 289 290 | MANIFEST_DATE, MANIFEST_VERSION, RELEASE_VERSION, COMPILER_NAME); fossil_print("%*s%.19s [%.10s] (%s)\n", colWidth, "sqlite-version:", sqlite3_sourceid(), &sqlite3_sourceid()[20], sqlite3_libversion()); } | < | | | | | | | | 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 | MANIFEST_DATE, MANIFEST_VERSION, RELEASE_VERSION, COMPILER_NAME); fossil_print("%*s%.19s [%.10s] (%s)\n", colWidth, "sqlite-version:", sqlite3_sourceid(), &sqlite3_sourceid()[20], sqlite3_libversion()); } fossil_print("%*s%d pages, %d bytes/pg, %d free pages, " "%s, %s mode\n", colWidth, "database-stats:", db_int(0, "PRAGMA repository.page_count"), db_int(0, "PRAGMA repository.page_size"), db_int(0, "PRAGMA repository.freelist_count"), db_text(0, "PRAGMA repository.encoding"), db_text(0, "PRAGMA repository.journal_mode")); if( dbCheck ){ fossil_print("%*s%s\n", colWidth, "database-check:", db_text(0, "PRAGMA quick_check(1)")); } } /* ** WEBPAGE: urllist ** ** Show ways in which this repository has been accessed */ void urllist_page(void){ Stmt q; int cnt; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_header("URLs and Checkouts"); style_adunit_config(ADUNIT_RIGHT_OK); style_submenu_element("Stat", "stat"); style_submenu_element("Schema", "repo_schema"); @ <div class="section">URLs</div> @ <table border="0" width='100%%'> db_prepare(&q, "SELECT substr(name,9), datetime(mtime,'unixepoch')" " FROM config WHERE name GLOB 'baseurl:*' ORDER BY 2 DESC"); cnt = 0; while( db_step(&q)==SQLITE_ROW ){ @ <tr><td width='100%%'>%h(db_column_text(&q,0))</td> |
︙ | ︙ | |||
359 360 361 362 363 364 365 | void repo_schema_page(void){ Stmt q; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_header("Repository Schema"); style_adunit_config(ADUNIT_RIGHT_OK); | | | > > > | | | 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 | void repo_schema_page(void){ Stmt q; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_header("Repository Schema"); style_adunit_config(ADUNIT_RIGHT_OK); style_submenu_element("Stat", "stat"); style_submenu_element("URLs", "urllist"); if( sqlite3_compileoption_used("ENABLE_DBSTAT_VTAB") ){ style_submenu_element("Table Sizes", "repo-tabsize"); } db_prepare(&q, "SELECT sql FROM repository.sqlite_master WHERE sql IS NOT NULL"); @ <pre> while( db_step(&q)==SQLITE_ROW ){ @ %h(db_column_text(&q, 0)); } @ </pre> db_finalize(&q); style_footer(); |
︙ | ︙ | |||
386 387 388 389 390 391 392 | sqlite3_int64 fsize; char zBuf[100]; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Repository Table Sizes"); style_adunit_config(ADUNIT_RIGHT_OK); | | > > > < | | | | | < | < < | | | | < | | 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 | sqlite3_int64 fsize; char zBuf[100]; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Repository Table Sizes"); style_adunit_config(ADUNIT_RIGHT_OK); style_submenu_element("Stat", "stat"); if( g.perm.Admin ){ style_submenu_element("Schema", "repo_schema"); } db_multi_exec( "CREATE TEMP TABLE trans(name TEXT PRIMARY KEY,tabname TEXT)WITHOUT ROWID;" "INSERT INTO trans(name,tabname)" " SELECT name, tbl_name FROM repository.sqlite_master;" "CREATE TEMP TABLE piechart(amt REAL, label TEXT);" "INSERT INTO piechart(amt,label)" " SELECT count(*), " " coalesce((SELECT tabname FROM trans WHERE trans.name=dbstat.name),name)" " FROM dbstat('repository')" " GROUP BY 2 ORDER BY 2;" ); nPageFree = db_int(0, "PRAGMA repository.freelist_count"); if( nPageFree>0 ){ db_multi_exec( "INSERT INTO piechart(amt,label) VALUES(%d,'freelist')", nPageFree ); } fsize = file_size(g.zRepositoryName); approxSizeName(sizeof(zBuf), zBuf, fsize); @ <h2>Repository Size: %s(zBuf)</h2> @ <center><svg width='800' height='500'> piechart_render(800,500,PIE_OTHER|PIE_PERCENT); @ </svg></center> if( g.localOpen ){ db_multi_exec( "DELETE FROM trans;" "INSERT INTO trans(name,tabname)" " SELECT name, tbl_name FROM localdb.sqlite_master;" "DELETE FROM piechart;" "INSERT INTO piechart(amt,label)" " SELECT count(*), " " coalesce((SELECT tabname FROM trans WHERE trans.name=dbstat.name),name)" " FROM dbstat('localdb')" " GROUP BY 2 ORDER BY 2;" ); nPageFree = db_int(0, "PRAGMA localdb.freelist_count"); if( nPageFree>0 ){ db_multi_exec( "INSERT INTO piechart(amt,label) VALUES(%d,'freelist')", nPageFree ); } fsize = file_size(g.zLocalDbName); |
︙ | ︙ |
Changes to src/statrep.c.
︙ | ︙ | |||
309 310 311 312 313 314 315 | @ <td colspan='2'>Yearly total: %d(nEventsPerYear)</td> @</tr> } @ </tbody></table> if(nEventTotal){ const char *zAvgLabel = includeMonth ? "month" : "year"; int nAvg = iterations ? (nEventTotal/iterations) : 0; | | | | 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 | @ <td colspan='2'>Yearly total: %d(nEventsPerYear)</td> @</tr> } @ </tbody></table> if(nEventTotal){ const char *zAvgLabel = includeMonth ? "month" : "year"; int nAvg = iterations ? (nEventTotal/iterations) : 0; @ <br /><div>Total events: %d(nEventTotal) @ <br />Average per active %s(zAvgLabel): %d(nAvg) @ </div> } if( !includeMonth ){ output_table_sorting_javascript("statsTable","tnx",-1); } } |
︙ | ︙ | |||
340 341 342 343 344 345 346 | "CREATE TEMP VIEW piechart(amt,label) AS" " SELECT count(*), ifnull(euser,user) FROM v_reports" " GROUP BY ifnull(euser,user) ORDER BY count(*) DESC;" ); if( db_int(0, "SELECT count(*) FROM piechart")>=2 ){ @ <center><svg width=700 height=400> piechart_render(700, 400, PIE_OTHER|PIE_PERCENT); | | | 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 | "CREATE TEMP VIEW piechart(amt,label) AS" " SELECT count(*), ifnull(euser,user) FROM v_reports" " GROUP BY ifnull(euser,user) ORDER BY count(*) DESC;" ); if( db_int(0, "SELECT count(*) FROM piechart")>=2 ){ @ <center><svg width=700 height=400> piechart_render(700, 400, PIE_OTHER|PIE_PERCENT); @ </svg></centre><hr /> } @ <table class='statistics-report-table-events' border='0' @ cellpadding='2' cellspacing='0' id='statsTable'> @ <thead><tr> @ <th>User</th> @ <th>Events</th> @ <th width='90%%'><!-- relative commits graph --></th> |
︙ | ︙ | |||
501 502 503 504 505 506 507 | " WHERE ifnull(coalesce(euser,user,'')=%Q,1)" " GROUP BY 2 ORDER BY cast(strftime('%%w', mtime) AS INT);" , zUserName ); if( db_int(0, "SELECT count(*) FROM piechart")>=2 ){ @ <center><svg width=700 height=400> piechart_render(700, 400, PIE_OTHER|PIE_PERCENT); | | | 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 | " WHERE ifnull(coalesce(euser,user,'')=%Q,1)" " GROUP BY 2 ORDER BY cast(strftime('%%w', mtime) AS INT);" , zUserName ); if( db_int(0, "SELECT count(*) FROM piechart")>=2 ){ @ <center><svg width=700 height=400> piechart_render(700, 400, PIE_OTHER|PIE_PERCENT); @ </svg></centre><hr /> } @ <table class='statistics-report-table-events' border='0' @ cellpadding='2' cellspacing='0' id='statsTable'> @ <thead><tr> @ <th>DoW</th> @ <th>Day</th> @ <th>Events</th> |
︙ | ︙ | |||
569 570 571 572 573 574 575 | " SELECT substr(date('now'),1,4) UNION ALL" " SELECT b-1 FROM a" " WHERE b>0+(SELECT substr(date(min(mtime)),1,4) FROM event)" ") SELECT b, b FROM a ORDER BY b DESC"); if( zYear==0 || strlen(zYear)!=4 ){ zYear = db_text("1970","SELECT substr(date('now'),1,4);"); } | | | 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 | " SELECT substr(date('now'),1,4) UNION ALL" " SELECT b-1 FROM a" " WHERE b>0+(SELECT substr(date(min(mtime)),1,4) FROM event)" ") SELECT b, b FROM a ORDER BY b DESC"); if( zYear==0 || strlen(zYear)!=4 ){ zYear = db_text("1970","SELECT substr(date('now'),1,4);"); } cgi_printf("<br />"); db_prepare(&q, "SELECT DISTINCT strftime('%%W',mtime) AS wk, " " count(*) AS n " " FROM v_reports " " WHERE %Q=substr(date(mtime),1,4) " " AND mtime < current_timestamp " " AND ifnull(coalesce(euser,user,'')=%Q,1)" |
︙ | ︙ | |||
631 632 633 634 635 636 637 | } cgi_printf("</td></tr>"); } db_finalize(&q); cgi_printf("</tbody></table>"); if(total){ int nAvg = iterations ? (total/iterations) : 0; | | | 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 | } cgi_printf("</td></tr>"); } db_finalize(&q); cgi_printf("</tbody></table>"); if(total){ int nAvg = iterations ? (total/iterations) : 0; cgi_printf("<br /><div>Total events: %d<br />" "Average per active week: %d</div>", total, nAvg); } output_table_sorting_javascript("statsTable","tnx",-1); } /* Report types |
︙ | ︙ | |||
704 705 706 707 708 709 710 | zUserName = P("user"); if( zUserName==0 ) zUserName = P("u"); if( zUserName && zUserName[0]==0 ) zUserName = 0; if( zView==0 ){ zView = "byuser"; cgi_replace_query_parameter("view","byuser"); } | | | | | | 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 | zUserName = P("user"); if( zUserName==0 ) zUserName = P("u"); if( zUserName && zUserName[0]==0 ) zUserName = 0; if( zView==0 ){ zView = "byuser"; cgi_replace_query_parameter("view","byuser"); } for(i=0; i<count(aViewType); i++){ if( fossil_strcmp(zView, aViewType[i].zVal)==0 ){ eType = aViewType[i].eType; break; } } if( eType!=RPT_NONE ){ int nView = 0; /* Slots used in azView[] */ for(i=0; i<count(aViewType); i++){ azView[nView++] = aViewType[i].zVal; azView[nView++] = aViewType[i].zName; } if( eType!=RPT_BYFILE ){ style_submenu_multichoice("type", count(azType)/2, azType, 0); } style_submenu_multichoice("view", nView/2, azView, 0); if( eType!=RPT_BYUSER ){ style_submenu_sql("user","User:", "SELECT '', 'All Users' UNION ALL " "SELECT x, x FROM (" " SELECT DISTINCT trim(coalesce(euser,user)) AS x FROM event %s" " ORDER BY 1 COLLATE nocase) WHERE x!=''", eType==RPT_BYFILE ? "WHERE type='ci'" : "" ); } } style_submenu_element("Stats", "%R/stat"); style_header("Activity Reports"); switch( eType ){ case RPT_BYYEAR: stats_report_by_month_year(0, 0, zUserName); break; case RPT_BYMONTH: stats_report_by_month_year(1, 0, zUserName); |
︙ | ︙ |
Changes to src/style.c.
︙ | ︙ | |||
28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 | ** structure and displayed below the main menu. ** ** Populate these structure with calls to ** ** style_submenu_element() ** style_submenu_entry() ** style_submenu_checkbox() ** style_submenu_multichoice() ** ** prior to calling style_footer(). The style_footer() routine ** will generate the appropriate HTML text just below the main ** menu. */ static struct Submenu { const char *zLabel; /* Button label */ | > > < | | | > | 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 | ** structure and displayed below the main menu. ** ** Populate these structure with calls to ** ** style_submenu_element() ** style_submenu_entry() ** style_submenu_checkbox() ** style_submenu_binary() ** style_submenu_multichoice() ** style_submenu_sql() ** ** prior to calling style_footer(). The style_footer() routine ** will generate the appropriate HTML text just below the main ** menu. */ static struct Submenu { const char *zLabel; /* Button label */ const char *zLink; /* Jump to this link when button is pressed */ } aSubmenu[30]; static int nSubmenu = 0; /* Number of buttons */ static struct SubmenuCtrl { const char *zName; /* Form query parameter */ const char *zLabel; /* Label. Might be NULL for FF_MULTI */ unsigned char eType; /* FF_ENTRY, FF_MULTI, FF_BINARY */ unsigned char isDisabled; /* True if this control is grayed out */ short int iSize; /* Width for FF_ENTRY. Count for FF_MULTI */ const char *const *azChoice;/* value/display pairs for FF_MULTI */ const char *zFalse; /* FF_BINARY label when false */ } aSubmenuCtrl[20]; static int nSubmenuCtrl = 0; #define FF_ENTRY 1 #define FF_MULTI 2 #define FF_BINARY 3 #define FF_CHECKBOX 4 /* ** Remember that the header has been generated. The footer is omitted ** if an error occurs before the header. */ static int headerHasBeenGenerated = 0; |
︙ | ︙ | |||
199 200 201 202 203 204 205 | @ gebi("a%d(i+1)").href="%s(aHref[i])"; } } for(i=0; i<nFormAction; i++){ @ gebi("form%d(i+1)").action="%s(aFormAction[i])"; } @ } | | | 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 | @ gebi("a%d(i+1)").href="%s(aHref[i])"; } } for(i=0; i<nFormAction; i++){ @ gebi("form%d(i+1)").action="%s(aFormAction[i])"; } @ } if( sqlite3_strglob("*Opera Mini/[1-9]*", PD("HTTP_USER_AGENT",""))==0 ){ /* Special case for Opera Mini, which executes JS server-side */ @ var isOperaMini = Object.prototype.toString.call(window.operamini) @ === "[object OperaMini]"; @ if( isOperaMini ){ @ setTimeout("setAllHrefs();",%d(nDelay)); @ } }else if( db_get_boolean("auto-hyperlink-ishuman",0) && g.isHuman ){ |
︙ | ︙ | |||
228 229 230 231 232 233 234 | } /* ** Add a new element to the submenu */ void style_submenu_element( const char *zLabel, | < | < | > > > > > > > > > > > > | | | 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 | } /* ** Add a new element to the submenu */ void style_submenu_element( const char *zLabel, const char *zLink, ... ){ va_list ap; assert( nSubmenu < count(aSubmenu) ); aSubmenu[nSubmenu].zLabel = zLabel; va_start(ap, zLink); aSubmenu[nSubmenu].zLink = vmprintf(zLink, ap); va_end(ap); nSubmenu++; } void style_submenu_entry( const char *zName, /* Query parameter name */ const char *zLabel, /* Label before the entry box */ int iSize, /* Size of the entry box */ int isDisabled /* True if disabled */ ){ assert( nSubmenuCtrl < count(aSubmenuCtrl) ); aSubmenuCtrl[nSubmenuCtrl].zName = zName; aSubmenuCtrl[nSubmenuCtrl].zLabel = zLabel; aSubmenuCtrl[nSubmenuCtrl].iSize = iSize; aSubmenuCtrl[nSubmenuCtrl].isDisabled = isDisabled; aSubmenuCtrl[nSubmenuCtrl].eType = FF_ENTRY; nSubmenuCtrl++; } void style_submenu_checkbox( const char *zName, /* Query parameter name */ const char *zLabel, /* Label to display after the checkbox */ int isDisabled /* True if disabled */ ){ assert( nSubmenuCtrl < count(aSubmenuCtrl) ); aSubmenuCtrl[nSubmenuCtrl].zName = zName; aSubmenuCtrl[nSubmenuCtrl].zLabel = zLabel; aSubmenuCtrl[nSubmenuCtrl].isDisabled = isDisabled; aSubmenuCtrl[nSubmenuCtrl].eType = FF_CHECKBOX; nSubmenuCtrl++; } void style_submenu_binary( const char *zName, /* Query parameter name */ const char *zTrue, /* Label to show when parameter is true */ const char *zFalse, /* Label to show when the parameter is false */ int isDisabled /* True if this control is disabled */ ){ assert( nSubmenuCtrl < count(aSubmenuCtrl) ); aSubmenuCtrl[nSubmenuCtrl].zName = zName; aSubmenuCtrl[nSubmenuCtrl].zLabel = zTrue; aSubmenuCtrl[nSubmenuCtrl].zFalse = zFalse; aSubmenuCtrl[nSubmenuCtrl].isDisabled = isDisabled; aSubmenuCtrl[nSubmenuCtrl].eType = FF_BINARY; nSubmenuCtrl++; } void style_submenu_multichoice( const char *zName, /* Query parameter name */ int nChoice, /* Number of options */ const char *const *azChoice,/* value/display pairs. 2*nChoice entries */ int isDisabled /* True if this control is disabled */ ){ assert( nSubmenuCtrl < count(aSubmenuCtrl) ); aSubmenuCtrl[nSubmenuCtrl].zName = zName; aSubmenuCtrl[nSubmenuCtrl].iSize = nChoice; aSubmenuCtrl[nSubmenuCtrl].azChoice = azChoice; aSubmenuCtrl[nSubmenuCtrl].isDisabled = isDisabled; aSubmenuCtrl[nSubmenuCtrl].eType = FF_MULTI; nSubmenuCtrl++; } |
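To see how the submenu helpers above fit together, here is a hypothetical page handler. The page name, labels, and query parameter are invented for illustration; the individual calls mirror real usage visible in the stat.c hunks earlier in this diff.

/*
** WEBPAGE: submenu_demo
**
** Hypothetical page illustrating the submenu API; not part of this check-in.
*/
void submenu_demo_page(void){
  login_check_credentials();
  if( !g.perm.Read ){ login_needed(g.anon.Read); return; }
  style_header("Submenu Demo");
  style_submenu_element("Stats", "%R/stat");        /* plain link button */
  style_submenu_checkbox("showall", "Show All", 0); /* auto-submitting checkbox */
  @ <p>Page body goes here.</p>
  style_footer();
}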
︙ | ︙ | |||
398 399 400 401 402 403 404 405 406 407 408 409 410 411 | @ <!DOCTYPE html> if( g.thTrace ) Th_Trace("BEGIN_HEADER<br />\n", -1); /* Generate the header up through the main menu */ Th_Store("project_name", db_get("project-name","Unnamed Fossil Project")); Th_Store("title", zTitle); Th_Store("baseurl", g.zBaseURL); Th_Store("secureurl", login_wants_https_redirect()? g.zHttpsURL: g.zBaseURL); Th_Store("home", g.zTop); Th_Store("index_page", db_get("index-page","/home")); if( local_zCurrentPage==0 ) style_set_current_page("%T", g.zPath); Th_Store("current_page", local_zCurrentPage); | > | 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 | @ <!DOCTYPE html> if( g.thTrace ) Th_Trace("BEGIN_HEADER<br />\n", -1); /* Generate the header up through the main menu */ Th_Store("project_name", db_get("project-name","Unnamed Fossil Project")); Th_Store("project_description", db_get("project-description","")); Th_Store("title", zTitle); Th_Store("baseurl", g.zBaseURL); Th_Store("secureurl", login_wants_https_redirect()? g.zHttpsURL: g.zBaseURL); Th_Store("home", g.zTop); Th_Store("index_page", db_get("index-page","/home")); if( local_zCurrentPage==0 ) style_set_current_page("%T", g.zPath); Th_Store("current_page", local_zCurrentPage); |
︙ | ︙ | |||
521 522 523 524 525 526 527 | if( p->zLink==0 ){ @ <span class="label">%h(p->zLabel)</span> }else{ @ <a class="label" href="%h(p->zLink)">%h(p->zLabel)</a> } } } | < | | | | | | | | | < | > | > | | < | | > | < | < | | | | | | < | | < < | | < | | | > | < | | | | | | < | | < < < | > > > | < < | > > > | < | | | > > > > > | > > > | 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 | if( p->zLink==0 ){ @ <span class="label">%h(p->zLabel)</span> }else{ @ <a class="label" href="%h(p->zLink)">%h(p->zLabel)</a> } } } for(i=0; i<nSubmenuCtrl; i++){ const char *zQPN = aSubmenuCtrl[i].zName; const char *zDisabled = " disabled"; if( !aSubmenuCtrl[i].isDisabled ){ zDisabled = ""; cgi_tag_query_parameter(zQPN); } switch( aSubmenuCtrl[i].eType ){ case FF_ENTRY: @ <span class='submenuctrl'>\ @ %h(aSubmenuCtrl[i].zLabel)\ @ <input type='text' name='%s(zQPN)' value='%h(PD(zQPN, ""))' \ if( aSubmenuCtrl[i].iSize<0 ){ @ size='%d(-aSubmenuCtrl[i].iSize)' \ }else if( aSubmenuCtrl[i].iSize>0 ){ @ size='%d(aSubmenuCtrl[i].iSize)' \ @ maxlength='%d(aSubmenuCtrl[i].iSize)' \ } @ onchange='gebi("f01").submit();'%s(zDisabled)></span> break; case FF_MULTI: { int j; const char *zVal = P(zQPN); if( aSubmenuCtrl[i].zLabel ){ @ %h(aSubmenuCtrl[i].zLabel)\ } @ <select class='submenuctrl' size='1' name='%s(zQPN)' \ @ onchange='gebi("f01").submit();'%s(zDisabled)> for(j=0; j<aSubmenuCtrl[i].iSize*2; j+=2){ const char *zQPV = aSubmenuCtrl[i].azChoice[j]; @ <option value='%h(zQPV)'\ if( fossil_strcmp(zVal, zQPV)==0 ){ @ selected\ } @ >%h(aSubmenuCtrl[i].azChoice[j+1])</option> } @ </select> break; } case FF_BINARY: { int isTrue = PB(zQPN); @ <select class='submenuctrl' size='1' name='%s(zQPN)' \ @ onchange='gebi("f01").submit();'%s(zDisabled)> @ <option value='1'\ if( isTrue ){ @ selected\ } @ >%h(aSubmenuCtrl[i].zLabel)</option> @ <option value='0'\ if( !isTrue ){ @ selected\ } @ >%h(aSubmenuCtrl[i].zFalse)</option> @ </select> break; } case FF_CHECKBOX: @ <label class='submenuctrl'>\ @ <input type='checkbox' name='%s(zQPN)' value='1' \ if( PB(zQPN) ){ @ checked \ } @ onchange='gebi("f01").submit();'%s(zDisabled)>\ @ %h(aSubmenuCtrl[i].zLabel)</label> break; } } @ </div> if( nSubmenuCtrl ){ cgi_query_parameters_to_hidden(); cgi_tag_query_parameter(0); @ </form> |
︙ | ︙ | |||
701 702 703 704 705 706 707 | @ border-collapse: collapse; }, { "td.timelineTableCell", "the format for the timeline data cells", @ vertical-align: top; @ text-align: left; }, | | > | 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 | @ border-collapse: collapse; }, { "td.timelineTableCell", "the format for the timeline data cells", @ vertical-align: top; @ text-align: left; }, { "tr.timelineCurrent", "the format for the timeline data cell of the current checkout", @ padding: .1em .2em; @ border: 1px dashed #446979; @ box-shadow: 1px 1px 4px #888; }, { "tr.timelineSelected", "The row in the timeline table that contains the entry of interest", @ padding: .1em .2em; @ border: 2px solid lightgray; @ background-color: #ffc; @ box-shadow: 4px 4px 2px #888; |
︙ | ︙ | |||
1559 1560 1561 1562 1563 1564 1565 | "HTTP_ACCEPT", "HTTP_ACCEPT_CHARSET", "HTTP_ACCEPT_ENCODING", "HTTP_ACCEPT_LANGUAGE", "HTTP_CONNECTION", "HTTP_HOST", "HTTP_USER_AGENT", "HTTP_REFERER", "PATH_INFO", "PATH_TRANSLATED", "QUERY_STRING", "REMOTE_ADDR", "REMOTE_PORT", "REQUEST_METHOD", "REQUEST_URI", "SCRIPT_FILENAME", "SCRIPT_NAME", "SERVER_PROTOCOL", "HOME", "FOSSIL_HOME", "USERNAME", "USER", "FOSSIL_USER", "SQLITE_TMPDIR", "TMPDIR", | | > > > < | | < | | | 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 | "HTTP_ACCEPT", "HTTP_ACCEPT_CHARSET", "HTTP_ACCEPT_ENCODING", "HTTP_ACCEPT_LANGUAGE", "HTTP_CONNECTION", "HTTP_HOST", "HTTP_USER_AGENT", "HTTP_REFERER", "PATH_INFO", "PATH_TRANSLATED", "QUERY_STRING", "REMOTE_ADDR", "REMOTE_PORT", "REQUEST_METHOD", "REQUEST_URI", "SCRIPT_FILENAME", "SCRIPT_NAME", "SERVER_PROTOCOL", "HOME", "FOSSIL_HOME", "USERNAME", "USER", "FOSSIL_USER", "SQLITE_TMPDIR", "TMPDIR", "TEMP", "TMP", "FOSSIL_VFS", "FOSSIL_FORCE_TICKET_MODERATION", "FOSSIL_FORCE_WIKI_MODERATION", "FOSSIL_TCL_PATH", "TH1_DELETE_INTERP", "TH1_ENABLE_DOCS", "TH1_ENABLE_HOOKS", "TH1_ENABLE_TCL", "REMOTE_HOST" }; login_check_credentials(); if( !g.perm.Admin && !g.perm.Setup && !db_get_boolean("test_env_enable",0) ){ login_needed(0); return; } for(i=0; i<count(azCgiVars); i++) (void)P(azCgiVars[i]); style_header("Environment Test"); showAll = PB("showall"); style_submenu_checkbox("showall", "Cookies", 0); style_submenu_element("Stats", "%R/stat"); #if !defined(_WIN32) @ uid=%d(getuid()), gid=%d(getgid())<br /> #endif @ g.zBaseURL = %h(g.zBaseURL)<br /> @ g.zHttpsURL = %h(g.zHttpsURL)<br /> @ g.zTop = %h(g.zTop)<br /> @ g.zPath = %h(g.zPath)<br /> |
︙ | ︙ | |||
1600 1601 1602 1603 1604 1605 1606 | } zCap[i] = 0; if( i>0 ){ @ anonymous-adds = %s(zCap)<br /> } @ g.zRepositoryName = %h(g.zRepositoryName)<br /> @ load_average() = %f(load_average())<br /> | | | | 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 | } zCap[i] = 0; if( i>0 ){ @ anonymous-adds = %s(zCap)<br /> } @ g.zRepositoryName = %h(g.zRepositoryName)<br /> @ load_average() = %f(load_average())<br /> @ <hr /> P("HTTP_USER_AGENT"); cgi_print_all(showAll); if( showAll && blob_size(&g.httpHeader)>0 ){ @ <hr /> @ <pre> @ %h(blob_str(&g.httpHeader)) @ </pre> } if( g.perm.Setup ){ const char *zRedir = P("redirect"); if( zRedir ) cgi_redirect(zRedir); |
︙ | ︙ |
Changes to src/sync.c.
︙ | ︙ | |||
75 76 77 78 79 80 81 | url_enable_proxy("via proxy: "); rc = client_sync(flags, configSync, 0); return rc; } /* ** This routine will try a number of times to perform autosync with a | | > > > > | > > > > > > > > > > > > > | > > > > > > > > > > > > > > > > < < < < < < | > > > > > | 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 | url_enable_proxy("via proxy: "); rc = client_sync(flags, configSync, 0); return rc; } /* ** This routine will try a number of times to perform autosync with a ** 0.5 second sleep between attempts. ** ** Return zero on success and non-zero on a failure. If failure occurs ** and doPrompt flag is true, ask the user if they want to continue, and ** if they answer "yes" then return zero in spite of the failure. */ int autosync_loop(int flags, int nTries, int doPrompt){ int n = 0; int rc = 0; if( (flags & (SYNC_PUSH|SYNC_PULL))==(SYNC_PUSH|SYNC_PULL) && db_get_boolean("uv-sync",0) ){ flags |= SYNC_UNVERSIONED; } while( (n==0 || n<nTries) && (rc=autosync(flags)) ){ if( rc ){ if( ++n<nTries ){ fossil_warning("Autosync failed, making another attempt."); sqlite3_sleep(500); }else{ fossil_warning("Autosync failed."); } } } if( rc && doPrompt ){ Blob ans; char cReply; prompt_user("continue in spite of sync failure (y/N)? ", &ans); cReply = blob_str(&ans)[0]; if( cReply=='y' || cReply=='Y' ) rc = 0; blob_reset(&ans); } return rc; } /* ** This routine processes the command-line argument for push, pull, ** and sync. If a command-line argument is given, that is the URL ** of a server to sync against. If no argument is given, use the ** most recently synced URL. Remember the current URL for next time. */ static void process_sync_args( unsigned *pConfigFlags, /* Write configuration flags here */ unsigned *pSyncFlags, /* Write sync flags here */ int uvOnly /* Special handling flags for UV sync */ ){ const char *zUrl = 0; const char *zHttpAuth = 0; unsigned configSync = 0; unsigned urlFlags = URL_REMEMBER | URL_PROMPT_PW; int urlOptional = 0; if( find_option("autourl",0,0)!=0 ){ urlOptional = 1; urlFlags = 0; } zHttpAuth = find_option("httpauth","B",1); if( find_option("once",0,0)!=0 ) urlFlags &= ~URL_REMEMBER; if( (*pSyncFlags) & SYNC_FROMPARENT ) urlFlags &= ~URL_REMEMBER; if( !uvOnly ){ if( find_option("private",0,0)!=0 ){ *pSyncFlags |= SYNC_PRIVATE; } /* The --verily option to sync, push, and pull forces extra igot cards ** to be exchanged. This can overcome malfunctions in the sync protocol. */ if( find_option("verily",0,0)!=0 ){ *pSyncFlags |= SYNC_RESYNC; } } if( find_option("private",0,0)!=0 ){ *pSyncFlags |= SYNC_PRIVATE; } if( find_option("verbose","v",0)!=0 ){ *pSyncFlags |= SYNC_VERBOSE; } url_proxy_options(); clone_ssh_find_options(); if( !uvOnly ) db_find_and_open_repository(0, 0); db_open_config(0, 0); if( g.argc==2 ){ if( db_get_boolean("auto-shun",1) ) configSync = CONFIGSET_SHUN; }else if( g.argc==3 ){ zUrl = g.argv[2]; } if( ((*pSyncFlags) & (SYNC_PUSH|SYNC_PULL))==(SYNC_PUSH|SYNC_PULL) && db_get_boolean("uv-sync",0) ){ *pSyncFlags |= SYNC_UNVERSIONED; } if( urlFlags & URL_REMEMBER ){ clone_ssh_db_set_options(); } url_parse(zUrl, urlFlags); remember_or_get_http_auth(zHttpAuth, urlFlags & URL_REMEMBER, zUrl); url_remember(); |
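A sketch of how a caller might use autosync_loop() as documented above. The helper itself is illustrative and not part of the check-in; the "autosync-tries" setting name and db_get_int() mirror typical Fossil callers but are assumptions here.

/* Attempt an automatic pull before an operation, retrying as configured.
** Returns zero on success (or if the user elects to continue anyway). */
static int demo_autosync_pull(void){
  int nTries = db_get_int("autosync-tries", 1);  /* number of attempts */
  return autosync_loop(SYNC_PULL, nTries, 1);    /* 1 => prompt on failure */
}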
︙ | ︙ | |||
163 164 165 166 167 168 169 | /* ** COMMAND: pull ** ** Usage: %fossil pull ?URL? ?options? ** ** Pull all sharable changes from a remote repository into the local repository. ** Sharable changes include public check-ins, and wiki, ticket, and tech-note | | > > > > | | | | | | | > > > > | > > > > > > > > > > > > | 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 | /* ** COMMAND: pull ** ** Usage: %fossil pull ?URL? ?options? ** ** Pull all sharable changes from a remote repository into the local repository. ** Sharable changes include public check-ins, and wiki, ticket, and tech-note ** edits. Add the --private option to pull private branches. Use the ** "configuration pull" command to pull website configuration details. ** ** If URL is not specified, then the URL from the most recent clone, push, ** pull, remote-url, or sync command is used. See "fossil help clone" for ** details on the URL formats. ** ** Options: ** ** -B|--httpauth USER:PASS Credentials for the simple HTTP auth protocol, ** if required by the remote website ** --from-parent-project Pull content from the parent project ** --ipv4 Use only IPv4, not IPv6 ** --once Do not remember URL for subsequent syncs ** --proxy PROXY Use the specified HTTP proxy ** --private Pull private branches too ** -R|--repository REPO Repository to pull into ** --ssl-identity FILE Local SSL credentials, if requested by remote ** --ssh-command SSH Use SSH as the "ssh" command ** -v|--verbose Additional (debugging) output ** --verily Exchange extra information with the remote ** to ensure no content is overlooked ** ** See also: clone, config pull, push, remote-url, sync */ void pull_cmd(void){ unsigned configFlags = 0; unsigned syncFlags = SYNC_PULL; if( find_option("from-parent-project",0,0)!=0 ){ syncFlags |= SYNC_FROMPARENT; } process_sync_args(&configFlags, &syncFlags, 0); /* We should be done with options.. */ verify_all_options(); client_sync(syncFlags, configFlags, 0); } /* ** COMMAND: push ** ** Usage: %fossil push ?URL? ?options? ** ** Push all sharable changes from the local repository to a remote repository. ** Sharable changes include public check-ins, and wiki, ticket, and tech-note ** edits. Use --private to also push private branches. Use the ** "configuration push" command to push website configuration details. ** ** If URL is not specified, then the URL from the most recent clone, push, ** pull, remote-url, or sync command is used. See "fossil help clone" for ** details on the URL formats. 
** ** Options: ** ** -B|--httpauth USER:PASS Credentials for the simple HTTP auth protocol, ** if required by the remote website ** --ipv4 Use only IPv4, not IPv6 ** --once Do not remember URL for subsequent syncs ** --proxy PROXY Use the specified HTTP proxy ** --private Push private branches too ** -R|--repository REPO Repository to pull into ** --ssl-identity FILE Local SSL credentials, if requested by remote ** --ssh-command SSH Use SSH as the "ssh" command ** -v|--verbose Additional (debugging) output ** --verily Exchange extra information with the remote ** to ensure no content is overlooked ** ** See also: clone, config push, pull, remote-url, sync */ void push_cmd(void){ unsigned configFlags = 0; unsigned syncFlags = SYNC_PUSH; process_sync_args(&configFlags, &syncFlags, 0); /* We should be done with options.. */ verify_all_options(); if( db_get_boolean("dont-push",0) ){ fossil_fatal("pushing is prohibited: the 'dont-push' option is set"); } client_sync(syncFlags, 0, 0); } /* ** COMMAND: sync ** ** Usage: %fossil sync ?URL? ?options? ** ** Synchronize all sharable changes between the local repository and a ** remote repository. Sharable changes include public check-ins and ** edits to wiki pages, tickets, and technical notes. ** ** If URL is not specified, then the URL from the most recent clone, push, ** pull, remote-url, or sync command is used. See "fossil help clone" for ** details on the URL formats. ** ** Options: ** ** -B|--httpauth USER:PASS Credentials for the simple HTTP auth protocol, ** if required by the remote website ** --ipv4 Use only IPv4, not IPv6 ** --once Do not remember URL for subsequent syncs ** --proxy PROXY Use the specified HTTP proxy ** --private Sync private branches too ** -R|--repository REPO Repository to pull into ** --ssl-identity FILE Local SSL credentials, if requested by remote ** --ssh-command SSH Use SSH as the "ssh" command ** -u|--unversioned Also sync unversioned content ** -v|--verbose Additional (debugging) output ** --verily Exchange extra information with the remote ** to ensure no content is overlooked ** ** See also: clone, pull, push, remote-url */ void sync_cmd(void){ unsigned configFlags = 0; unsigned syncFlags = SYNC_PUSH|SYNC_PULL; if( find_option("unversioned","u",0)!=0 ){ syncFlags |= SYNC_UNVERSIONED; } process_sync_args(&configFlags, &syncFlags, 0); /* We should be done with options.. */ verify_all_options(); if( db_get_boolean("dont-push",0) ) syncFlags &= ~SYNC_PUSH; client_sync(syncFlags, configFlags, 0); if( (syncFlags & SYNC_PUSH)==0 ){ fossil_warning("pull only: the 'dont-push' option is set"); } } /* ** Handle the "fossil unversioned sync" and "fossil unversioned revert" ** commands. */ void sync_unversioned(unsigned syncFlags){ unsigned configFlags = 0; (void)find_option("uv-noop",0,0); process_sync_args(&configFlags, &syncFlags, 1); verify_all_options(); client_sync(syncFlags, 0, 0); } /* ** COMMAND: remote-url ** ** Usage: %fossil remote-url ?URL|off? ** ** Query and/or change the default server URL used by the "pull", "push", |
︙ | ︙ |
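The pull, push, and sync commands documented above differ mainly in which SYNC_PULL/SYNC_PUSH bits they hand to client_sync(), so their command lines look alike. A minimal usage sketch, using only options listed in the comments; the URL and project name are hypothetical:

    fossil pull https://example.org/myproject --once
    fossil push --private
    fossil sync -u --verily

When the "dont-push" setting is enabled, push refuses to run, while sync drops its push half and warns that it is pulling only.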
Changes to src/tag.c.
︙ | ︙ | |||
146 147 148 149 150 151 152 153 154 155 156 157 158 159 | id = db_last_insert_rowid(); } return id; } /* ** Insert a tag into the database. */ int tag_insert( const char *zTag, /* Name of the tag (w/o the "+" or "-" prefix */ int tagtype, /* 0:cancel 1:singleton 2:propagated */ const char *zValue, /* Value if the tag is really a property */ int srcId, /* Artifact that contains this tag */ double mtime, /* Timestamp. Use default if <=0.0 */ | > > > | 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 | id = db_last_insert_rowid(); } return id; } /* ** Insert a tag into the database. ** ** Also translate zTag into a tagid and return the tagid. (In other words ** if zTag is "bgcolor" then return TAG_BGCOLOR.) */ int tag_insert( const char *zTag, /* Name of the tag (w/o the "+" or "-" prefix */ int tagtype, /* 0:cancel 1:singleton 2:propagated */ const char *zValue, /* Value if the tag is really a property */ int srcId, /* Artifact that contains this tag */ double mtime, /* Timestamp. Use default if <=0.0 */ |
︙ | ︙ | |||
226 227 228 229 230 231 232 233 234 235 236 237 238 239 | if( tagid==TAG_DATE ){ db_multi_exec("UPDATE event " " SET mtime=julianday(%Q)," " omtime=coalesce(omtime,mtime)" " WHERE objid=%d", zValue, rid); } if( tagtype==1 ) tagtype = 0; tag_propagate(rid, tagid, tagtype, rid, zValue, mtime); return tagid; } /* | > > > | 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 | if( tagid==TAG_DATE ){ db_multi_exec("UPDATE event " " SET mtime=julianday(%Q)," " omtime=coalesce(omtime,mtime)" " WHERE objid=%d", zValue, rid); } if( tagid==TAG_PARENT && tagtype==1 ){ manifest_reparent_checkin(rid, zValue); } if( tagtype==1 ) tagtype = 0; tag_propagate(rid, tagid, tagtype, rid, zValue, mtime); return tagid; } /* |
︙ | ︙ | |||
271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 | g.markPrivate = content_is_private(rid); zValue = g.argc==5 ? g.argv[4] : 0; db_begin_transaction(); tag_insert(zTag, tagtype, zValue, -1, 0.0, rid); db_end_transaction(0); } /* ** Add a control record to the repository that either creates ** or cancels a tag. */ void tag_add_artifact( const char *zPrefix, /* Prefix to prepend to tag name */ const char *zTagname, /* The tag to add or cancel */ const char *zObjName, /* Name of object attached to */ const char *zValue, /* Value for the tag. Might be NULL */ int tagtype, /* 0:cancel 1:singleton 2:propagated */ const char *zDateOvrd, /* Override date string */ const char *zUserOvrd /* Override user name */ ){ int rid; int nrid; char *zDate; Blob uuid; Blob ctrl; Blob cksum; static const char zTagtype[] = { '-', '+', '*' }; assert( tagtype>=0 && tagtype<=2 ); user_select(); blob_zero(&uuid); blob_append(&uuid, zObjName, -1); if( name_to_uuid(&uuid, 9, "*") ){ fossil_fatal("%s", g.zErrMsg); return; | > > > > > > > > > > > > > > > > > > | 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 | g.markPrivate = content_is_private(rid); zValue = g.argc==5 ? g.argv[4] : 0; db_begin_transaction(); tag_insert(zTag, tagtype, zValue, -1, 0.0, rid); db_end_transaction(0); } /* ** OR this value into the tagtype argument to tag_add_artifact to ** cause the tag to be displayed on standard output rather than be ** inserted. Used for --dryrun options and debugging. */ #if INTERFACE #define TAG_ADD_DRYRUN 0x04 #endif /* ** Add a control record to the repository that either creates ** or cancels a tag. ** ** tagtype should normally be 0, 1, or 2. But if the TAG_ADD_DRYRUN bit ** is also set, then simply print the text of the tag on standard output ** (for testing purposes) rather than create the tag. */ void tag_add_artifact( const char *zPrefix, /* Prefix to prepend to tag name */ const char *zTagname, /* The tag to add or cancel */ const char *zObjName, /* Name of object attached to */ const char *zValue, /* Value for the tag. Might be NULL */ int tagtype, /* 0:cancel 1:singleton 2:propagated */ const char *zDateOvrd, /* Override date string */ const char *zUserOvrd /* Override user name */ ){ int rid; int nrid; char *zDate; Blob uuid; Blob ctrl; Blob cksum; static const char zTagtype[] = { '-', '+', '*' }; int dryRun = 0; if( tagtype & TAG_ADD_DRYRUN ){ tagtype &= ~TAG_ADD_DRYRUN; dryRun = 1; } assert( tagtype>=0 && tagtype<=2 ); user_select(); blob_zero(&uuid); blob_append(&uuid, zObjName, -1); if( name_to_uuid(&uuid, 9, "*") ){ fossil_fatal("%s", g.zErrMsg); return; |
︙ | ︙ | |||
325 326 327 328 329 330 331 | blob_appendf(&ctrl, " %F\n", zValue); }else{ blob_appendf(&ctrl, "\n"); } blob_appendf(&ctrl, "U %F\n", zUserOvrd ? zUserOvrd : login_name()); md5sum_blob(&ctrl, &cksum); blob_appendf(&ctrl, "Z %b\n", &cksum); | > > > > | | > > | | | | > > | | > | | 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 | blob_appendf(&ctrl, " %F\n", zValue); }else{ blob_appendf(&ctrl, "\n"); } blob_appendf(&ctrl, "U %F\n", zUserOvrd ? zUserOvrd : login_name()); md5sum_blob(&ctrl, &cksum); blob_appendf(&ctrl, "Z %b\n", &cksum); if( dryRun ){ fossil_print("%s", blob_str(&ctrl)); blob_reset(&ctrl); }else{ nrid = content_put(&ctrl); manifest_crosslink(nrid, &ctrl, MC_PERMIT_HOOKS); } assert( blob_is_reset(&ctrl) ); manifest_to_disk(rid); } /* ** COMMAND: tag ** ** Usage: %fossil tag SUBCOMMAND ... ** ** Run various subcommands to control tags and properties. ** ** %fossil tag add ?OPTIONS? TAGNAME CHECK-IN ?VALUE? ** ** Add a new tag or property to CHECK-IN. The tag will ** be usable instead of a CHECK-IN in commands such as ** update and merge. If the --propagate flag is present, ** the tag value propagates to all descendants of CHECK-IN. ** ** Options: ** --raw Raw tag name. ** --propagate Propagating tag. ** --date-override DATETIME Set date and time added. ** --user-override USER Name USER when adding the tag. ** --dryrun|-n Display the tag text, but do not ** actually insert it into the database. ** ** The --date-override and --user-override options support ** importing history from other SCM systems. DATETIME has ** the form 'YYYY-MM-DD HH:MM:SS'. ** ** %fossil tag cancel ?--raw? TAGNAME CHECK-IN ** ** Remove the tag TAGNAME from CHECK-IN, and also remove ** the propagation of the tag to any descendants. Use the ** --dryrun or -n options to see what would have happened. ** ** %fossil tag find ?OPTIONS? TAGNAME ** ** List all objects that use TAGNAME. TYPE can be "ci" for ** check-ins or "e" for events. The limit option limits the number ** of results to the given value. ** ** Options: ** --raw Raw tag name. ** -t|--type TYPE One of "ci", or "e". ** -n|--limit N Limit to N results. ** ** %fossil tag list|ls ?--raw? ?CHECK-IN? ** 
︙ | ︙ | |||
414 415 416 417 418 419 420 421 422 423 | n = strlen(g.argv[2]); if( n==0 ){ goto tag_cmd_usage; } if( strncmp(g.argv[2],"add",n)==0 ){ char *zValue; const char *zDateOvrd = find_option("date-override",0,1); const char *zUserOvrd = find_option("user-override",0,1); if( g.argc!=5 && g.argc!=6 ){ | > > | | > > | | | 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 | n = strlen(g.argv[2]); if( n==0 ){ goto tag_cmd_usage; } if( strncmp(g.argv[2],"add",n)==0 ){ char *zValue; int dryRun = 0; const char *zDateOvrd = find_option("date-override",0,1); const char *zUserOvrd = find_option("user-override",0,1); if( find_option("dryrun","n",0)!=0 ) dryRun = TAG_ADD_DRYRUN; if( g.argc!=5 && g.argc!=6 ){ usage("add ?options? TAGNAME CHECK-IN ?VALUE?"); } zValue = g.argc==6 ? g.argv[5] : 0; db_begin_transaction(); tag_add_artifact(zPrefix, g.argv[3], g.argv[4], zValue, 1+fPropagate+dryRun,zDateOvrd,zUserOvrd); db_end_transaction(0); }else if( strncmp(g.argv[2],"branch",n)==0 ){ fossil_fatal("the \"fossil tag branch\" command is discontinued\n" "Use the \"fossil branch new\" command instead."); }else if( strncmp(g.argv[2],"cancel",n)==0 ){ int dryRun = 0; if( find_option("dryrun","n",0)!=0 ) dryRun = TAG_ADD_DRYRUN; if( g.argc!=5 ){ usage("cancel ?options? TAGNAME CHECK-IN"); } db_begin_transaction(); tag_add_artifact(zPrefix, g.argv[3], g.argv[4], 0, dryRun, 0, 0); db_end_transaction(0); }else if( strncmp(g.argv[2],"find",n)==0 ){ Stmt q; const char *zType = find_option("type","t",1); Blob sql = empty_blob; |
︙ | ︙ | |||
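The TAG_ADD_DRYRUN path added above means "tag add" and "tag cancel" now accept --dryrun (or -n) and print the would-be control artifact, ending with the U and Z cards built in tag_add_artifact(), instead of storing it. A hypothetical invocation with made-up tag and check-in names:

    fossil tag add --dryrun needs-review abc1234567
    fossil tag cancel -n needs-review abc1234567

Dropping the flag performs the same operation for real.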
544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 | /* Cleanup */ return; tag_cmd_usage: usage("add|cancel|find|list ..."); } /* ** WEBPAGE: taglist ** ** List all non-propagating symbolic tags. */ void taglist_page(void){ Stmt q; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); } login_anonymous_available(); style_header("Tags"); style_adunit_config(ADUNIT_RIGHT_OK); | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 | /* Cleanup */ return; tag_cmd_usage: usage("add|cancel|find|list ..."); } /* ** COMMAND: reparent* ** ** Usage: %fossil reparent [OPTIONS] CHECK-IN PARENT .... ** ** Create a "parent" tag that causes CHECK-IN to be interpreted as a ** child of PARENT. If multiple PARENTs are listed, then the first is ** the primary parent and others are merge ancestors. ** ** This is an experts-only command. It is used to patch up a repository ** that has been damaged by a shun or that has been pieced together from ** two or more separate repositories. You should never need to reparent ** during normal operations. ** ** Reparenting is accomplished by adding a parent tag. So to undo the ** reparenting operation, simply delete the tag. ** ** --test Make database entries but do not add the tag artifact. ** So the reparent operation will be undone by the next ** "fossil rebuild" command. ** --dryrun | -n Print the tag that would have been created but do not ** actually change the database in any way. */ void reparent_cmd(void){ int bTest = find_option("test","",0)!=0; int rid; int i; Blob value; char *zUuid; int dryRun = 0; if( find_option("dryrun","n",0)!=0 ) dryRun = TAG_ADD_DRYRUN; db_find_and_open_repository(0, 0); verify_all_options(); if( g.argc<4 ){ usage("reparent [OPTIONS] PARENT ..."); } rid = name_to_typed_rid(g.argv[2], "ci"); blob_init(&value, 0, 0); for(i=3; i<g.argc; i++){ int pid = name_to_typed_rid(g.argv[i], "ci"); if( i>3 ) blob_append(&value, " ", 1); zUuid = rid_to_uuid(pid); blob_append(&value, zUuid, UUID_SIZE); fossil_free(zUuid); } if( bTest && !dryRun ){ tag_insert("parent", 1, blob_str(&value), -1, 0.0, rid); }else{ zUuid = rid_to_uuid(rid); tag_add_artifact("","parent",zUuid,blob_str(&value),1|dryRun,0,0); } } /* ** WEBPAGE: taglist ** ** List all non-propagating symbolic tags. */ void taglist_page(void){ Stmt q; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); } login_anonymous_available(); style_header("Tags"); style_adunit_config(ADUNIT_RIGHT_OK); style_submenu_element("Timeline", "tagtimeline"); @ <h2>Non-propagating tags:</h2> db_prepare(&q, "SELECT substr(tagname,5)" " FROM tag" " WHERE EXISTS(SELECT 1 FROM tagxref" " WHERE tagid=tag.tagid" " AND tagtype=1)" |
︙ | ︙ | |||
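A usage sketch for the reparent command introduced above, using the placeholder names from its usage line. The first PARENT becomes the primary parent and any further arguments become merge parents; --dryrun only prints the "parent" tag artifact, and --test records the change in the database without writing a tag artifact, so the next "fossil rebuild" undoes it:

    fossil reparent --dryrun CHECK-IN PARENT
    fossil reparent CHECK-IN PRIMARY-PARENT MERGE-PARENT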
599 600 601 602 603 604 605 | void tagtimeline_page(void){ Stmt q; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Tagged Check-ins"); | | | 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 | void tagtimeline_page(void){ Stmt q; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Tagged Check-ins"); style_submenu_element("List", "taglist"); login_anonymous_available(); @ <h2>Check-ins with non-propagating tags:</h2> db_prepare(&q, "%s AND blob.rid IN (SELECT rid FROM tagxref" " WHERE tagtype=1 AND srcid>0" " AND tagid IN (SELECT tagid FROM tag " " WHERE tagname GLOB 'sym-*'))" |
︙ | ︙ |
Changes to src/tar.c.
︙ | ︙ | |||
71 72 73 74 75 76 77 | db_multi_exec( "CREATE TEMP TABLE dir(name UNIQUE);" ); } /* | | | | 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 | db_multi_exec( "CREATE TEMP TABLE dir(name UNIQUE);" ); } /* ** Verify that all characters in 'zName' are in the ** ISO646 (=ASCII) character set. */ static int is_iso646_name( const char *zName, /* file path */ int nName /* path length */ ){ int i; for(i = 0; i < nName; i++){ unsigned char c = (unsigned char)zName[i]; if( c>0x7e ) return 0; } return 1; } /* ** copy string pSrc into pDst, truncating or padding with 0 if necessary */ static void padded_copy( char *pDest, int nDest, const char *pSrc, int nSrc ){ |
︙ | ︙ | |||
444 445 446 447 448 449 450 | } tar_finish(&zip); blob_write_to_file(&zip, g.argv[2]); } /* ** Given the RID for a check-in, construct a tarball containing | | > | | > > > > > > | > > > > > > > > > > > > > > > > > > > | > > | | > > | > > | | > > > | | | | | | | > > > > > > > > > > > > > > | | 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 | } tar_finish(&zip); blob_write_to_file(&zip, g.argv[2]); } /* ** Given the RID for a check-in, construct a tarball containing ** all files in that check-in that match pGlob (or all files if ** pGlob is NULL). ** ** If RID is for an object that is not a real manifest, then the ** resulting tarball contains a single file which is the RID ** object. pInclude and pExclude are ignored in this case. ** ** If the RID object does not exist in the repository, then ** pTar is zeroed. ** ** zDir is a "synthetic" subdirectory which all files get ** added to as part of the tarball. It may be 0 or an empty string, in ** which case it is ignored. The intention is to create a tarball which ** politely expands into a subdir instead of filling your current dir ** with source files. For example, pass a UUID or "ProjectName". ** */ void tarball_of_checkin( int rid, /* The RID of the checkin from which to form a tarball */ Blob *pTar, /* Write the tarball into this blob */ const char *zDir, /* Directory prefix for all file added to tarball */ Glob *pInclude, /* Only add files matching this pattern */ Glob *pExclude /* Exclude files matching this pattern */ ){ Blob mfile, hash, file; Manifest *pManifest; ManifestFile *pFile; Blob filename; int nPrefix; char *zName = 0; unsigned int mTime; content_get(rid, &mfile); if( blob_size(&mfile)==0 ){ blob_zero(pTar); return; } blob_zero(&hash); blob_zero(&filename); if( zDir && zDir[0] ){ blob_appendf(&filename, "%s/", zDir); } nPrefix = blob_size(&filename); pManifest = manifest_get(rid, CFTYPE_MANIFEST, 0); if( pManifest ){ int flg, eflg = 0; mTime = (pManifest->rDate - 2440587.5)*86400.0; tar_begin(mTime); flg = db_get_manifest_setting(); if( flg ){ /* eflg is the effective flags, taking include/exclude into account */ if( (pInclude==0 || glob_match(pInclude, "manifest")) && !glob_match(pExclude, "manifest") && (flg & MFESTFLG_RAW) ){ eflg |= MFESTFLG_RAW; } if( (pInclude==0 || glob_match(pInclude, "manifest.uuid")) && !glob_match(pExclude, "manifest.uuid") && (flg & MFESTFLG_UUID) ){ eflg |= MFESTFLG_UUID; } if( (pInclude==0 || glob_match(pInclude, "manifest.tags")) && !glob_match(pExclude, "manifest.tags") && (flg & MFESTFLG_TAGS) ){ eflg |= MFESTFLG_TAGS; } if( eflg & (MFESTFLG_RAW|MFESTFLG_UUID) ){ if( eflg & MFESTFLG_RAW ){ blob_append(&filename, "manifest", -1); zName = blob_str(&filename); } if( eflg & MFESTFLG_UUID ){ sha1sum_blob(&mfile, &hash); } if( eflg & MFESTFLG_RAW ) { sterilize_manifest(&mfile); tar_add_file(zName, &mfile, 0, mTime); } } blob_reset(&mfile); if( eflg & MFESTFLG_UUID ){ blob_append(&hash, "\n", 1); blob_resize(&filename, nPrefix); blob_append(&filename, "manifest.uuid", -1); zName = blob_str(&filename); tar_add_file(zName, &hash, 0, mTime); blob_reset(&hash); } if( 
eflg & MFESTFLG_TAGS ){ Blob tagslist; blob_zero(&tagslist); get_checkin_taglist(rid, &tagslist); blob_resize(&filename, nPrefix); blob_append(&filename, "manifest.tags", -1); zName = blob_str(&filename); tar_add_file(zName, &tagslist, 0, mTime); blob_reset(&tagslist); } } manifest_file_rewind(pManifest); while( (pFile = manifest_file_next(pManifest,0))!=0 ){ int fid; if( pInclude!=0 && !glob_match(pInclude, pFile->zName) ) continue; if( glob_match(pExclude, pFile->zName) ) continue; fid = uuid_to_rid(pFile->zUuid, 0); if( fid ){ content_get(fid, &file); blob_resize(&filename, nPrefix); blob_append(&filename, pFile->zName, -1); zName = blob_str(&filename); tar_add_file(zName, &file, manifest_file_mperm(pFile), mTime); blob_reset(&file); |
︙ | ︙ | |||
529 530 531 532 533 534 535 | blob_reset(&filename); tar_finish(pTar); } /* ** COMMAND: tarball* ** | | | > > > > > > > | | > > > > > > > > | 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 | blob_reset(&filename); tar_finish(pTar); } /* ** COMMAND: tarball* ** ** Usage: %fossil tarball VERSION OUTPUTFILE [OPTIONS] ** ** Generate a compressed tarball for a specified version. If the --name ** option is used, its argument becomes the name of the top-level directory ** in the resulting tarball. If --name is omitted, the top-level directory ** name is derived from the project name, the check-in date and time, and ** the artifact ID of the check-in. ** ** The GLOBLIST argument to --exclude and --include can be a comma-separated ** list of glob patterns, where each glob pattern may optionally be enclosed ** in "..." or '...' so that it may contain commas. If a file matches both ** --include and --exclude then it is excluded. ** ** Options: ** -X|--exclude GLOBLIST Comma-separated list of GLOBs of files to exclude ** --include GLOBLIST Comma-separated list of GLOBs of files to include ** --name DIRECTORYNAME The name of the top-level directory in the archive ** -R REPOSITORY Specify a Fossil repository */ void tarball_cmd(void){ int rid; Blob tarball; const char *zName; Glob *pInclude = 0; Glob *pExclude = 0; const char *zInclude; const char *zExclude; zName = find_option("name", 0, 1); zExclude = find_option("exclude", "X", 1); if( zExclude ) pExclude = glob_create(zExclude); zInclude = find_option("include", 0, 1); if( zInclude ) pInclude = glob_create(zInclude); db_find_and_open_repository(0, 0); /* We should be done with options.. */ verify_all_options(); if( g.argc!=4 ){ usage("VERSION OUTPUTFILE"); |
︙ | ︙ | |||
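A sketch of the tarball command with the new include/exclude globbing; the project and path names are invented. Glob lists are comma-separated, individual globs may be quoted so they can contain commas, and a file matching both lists is excluded:

    fossil tarball trunk project-src.tar.gz --name project-1.0
    fossil tarball release project-min.tar.gz --include 'src/*.c,src/*.h' -X '*/test/*'

Because the synthetic manifest, manifest.uuid, and manifest.tags entries pass through the same filters in tarball_of_checkin(), they too can be suppressed with -X when the manifest setting would otherwise add them.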
571 572 573 574 575 576 577 | " || substr(blob.uuid, 1, 10)" " FROM event, blob" " WHERE event.objid=%d" " AND blob.rid=%d", db_get("project-name", "unnamed"), rid, rid ); } | | > > | | | > | | > | | < < > | > > > > > > > > > > > > | > > > > | > > > > > > > | | | > > > > > > | | | | > | > | 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 | " || substr(blob.uuid, 1, 10)" " FROM event, blob" " WHERE event.objid=%d" " AND blob.rid=%d", db_get("project-name", "unnamed"), rid, rid ); } tarball_of_checkin(rid, &tarball, zName, pInclude, pExclude); glob_free(pInclude); glob_free(pExclude); blob_write_to_file(&tarball, g.argv[3]); blob_reset(&tarball); } /* ** WEBPAGE: tarball ** URL: /tarball ** ** Generate a compressed tarball for the check-in specified by the "uuid" ** query parameter. Return that compressed tarball as the HTTP reply ** content. ** ** Query parameters: ** ** name=NAME[.tar.gz] The base name of the output file. The default ** value is a configuration parameter in the project ** settings. A prefix of the name, omitting the ** extension, is used as the top-most directory name. ** ** uuid=TAG The check-in that is turned into a compressed tarball. ** Defaults to "trunk". ** ** in=PATTERN Only include files that match the comma-separated ** list of GLOB patterns in PATTERN, as with ex= ** ** ex=PATTERN Omit any file that matches PATTERN. PATTERN is a ** comma-separated list of GLOB patterns, where each ** pattern can optionally be quoted using ".." or '..'. ** Any file matching both ex= and in= is excluded. */ void tarball_page(void){ int rid; char *zName, *zRid, *zKey; int nName, nRid; const char *zInclude; /* The in= query parameter */ const char *zExclude; /* The ex= query parameter */ Blob cacheKey; /* The key to cache */ Glob *pInclude = 0; /* The compiled in= glob pattern */ Glob *pExclude = 0; /* The compiled ex= glob pattern */ Blob tarball; /* Tarball accumulated here */ login_check_credentials(); if( !g.perm.Zip ){ login_needed(g.anon.Zip); return; } load_control(); zName = mprintf("%s", PD("name","")); nName = strlen(zName); zRid = mprintf("%s", PD("uuid","trunk")); nRid = strlen(zRid); zInclude = P("in"); if( zInclude ) pInclude = glob_create(zInclude); zExclude = P("ex"); if( zExclude ) pExclude = glob_create(zExclude); if( nName>7 && fossil_strcmp(&zName[nName-7], ".tar.gz")==0 ){ /* Special case: Remove the ".tar.gz" suffix. */ nName -= 7; zName[nName] = 0; }else{ /* If the file suffix is not ".tar.gz" then just remove the ** suffix up to and including the last "." */ for(nName=strlen(zName)-1; nName>5; nName--){ if( zName[nName]=='.' 
){ zName[nName] = 0; break; } } } rid = name_to_typed_rid(nRid?zRid:zName, "ci"); if( rid==0 ){ @ Not found return; } if( nRid==0 && nName>10 ) zName[10] = 0; /* Compute a unique key for the cache entry based on query parameters */ blob_init(&cacheKey, 0, 0); blob_appendf(&cacheKey, "/tarball/%z", rid_to_uuid(rid)); blob_appendf(&cacheKey, "/%q", zName); if( zInclude ) blob_appendf(&cacheKey, ",in=%Q", zInclude); if( zExclude ) blob_appendf(&cacheKey, ",ex=%Q", zExclude); zKey = blob_str(&cacheKey); if( P("debug")!=0 ){ style_header("Tarball Generator Debug Screen"); @ zName = "%h(zName)"<br /> @ rid = %d(rid)<br /> if( zInclude ){ @ zInclude = "%h(zInclude)"<br /> } if( zExclude ){ @ zExclude = "%h(zExclude)"<br /> } @ zKey = "%h(zKey)" style_footer(); return; } if( referred_from_login() ){ style_header("Tarball Download"); @ <form action='%R/tarball/%h(zName).tar.gz'> cgi_query_parameters_to_hidden(); @ <p>Tarball named <b>%h(zName).tar.gz</b> holding the content @ of check-in <b>%h(zRid)</b>: @ <input type="submit" value="Download" /> @ </form> style_footer(); return; } blob_zero(&tarball); if( cache_read(&tarball, zKey)==0 ){ tarball_of_checkin(rid, &tarball, zName, pInclude, pExclude); cache_write(&tarball, zKey); } glob_free(pInclude); glob_free(pExclude); fossil_free(zName); fossil_free(zRid); blob_reset(&cacheKey); cgi_set_content(&tarball); cgi_set_content_type("application/x-compressed"); } |
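On the /tarball page the same filtering is exposed through the in= and ex= query parameters, and both values are folded into the cache key so that filtered and unfiltered archives are cached independently. A hypothetical request using the documented name=, uuid=, and in= parameters (host and names invented):

    https://example.org/myproject/tarball?name=myproject-1.0.tar.gz&uuid=release&in=src/*.c,src/*.h

Patterns containing characters that are special in URLs should be percent-encoded by the client.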
Changes to src/th_main.c.
︙ | ︙ | |||
280 281 282 283 284 285 286 | ){ int rc; if( argc<2 || argc>3 ){ return Th_WrongNumArgs(interp, "enable_output [LABEL] BOOLEAN"); } rc = Th_ToInt(interp, argv[argc-1], argl[argc-1], &enableOutput); if( g.thTrace ){ | | | 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 | ){ int rc; if( argc<2 || argc>3 ){ return Th_WrongNumArgs(interp, "enable_output [LABEL] BOOLEAN"); } rc = Th_ToInt(interp, argv[argc-1], argl[argc-1], &enableOutput); if( g.thTrace ){ Th_Trace("enable_output {%.*s} -> %d<br />\n", argl[1],argv[1],enableOutput); } return rc; } /* ** Returns a name for a TH1 return code. */ |
︙ | ︙ | |||
330 331 332 333 334 335 336 | } } static void sendError(const char *z, int n, int forceCgi){ int savedEnable = enableOutput; enableOutput = 1; if( forceCgi || g.cgiOutput ){ | | | 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 | } } static void sendError(const char *z, int n, int forceCgi){ int savedEnable = enableOutput; enableOutput = 1; if( forceCgi || g.cgiOutput ){ sendText("<hr /><p class=\"thmainError\">", -1, 0); } sendText("ERROR: ", -1, 0); sendText((char*)z, n, 1); sendText(forceCgi || g.cgiOutput ? "</p>" : "\n", -1, 0); enableOutput = savedEnable; } |
︙ | ︙ | |||
634 635 636 637 638 639 640 | static int hascapCmd( Th_Interp *interp, void *p, int argc, const char **argv, int *argl ){ | | > > | > > > | > | 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 | static int hascapCmd( Th_Interp *interp, void *p, int argc, const char **argv, int *argl ){ int rc = 1, i; char *zCapList = 0; int nCapList = 0; if( argc<2 ){ return Th_WrongNumArgs(interp, "hascap STRING ..."); } for(i=1; rc==1 && i<argc; i++){ if( g.thTrace ){ Th_ListAppend(interp, &zCapList, &nCapList, argv[i], argl[i]); } rc = login_has_capability((char*)argv[i],argl[i],*(int*)p); } if( g.thTrace ){ Th_Trace("[%s %#h] => %d<br />\n", argv[0], nCapList, zCapList, rc); Th_Free(interp, zCapList); } Th_SetResultInt(interp, rc); return TH_OK; } /* ** TH1 command: searchable STRING... |
︙ | ︙ | |||
876 877 878 879 880 881 882 | if( argc!=2 ){ return Th_WrongNumArgs(interp, "anycap STRING"); } for(i=0; rc==0 && i<argl[1]; i++){ rc = login_has_capability((char*)&argv[1][i],1,0); } if( g.thTrace ){ | | | 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 | if( argc!=2 ){ return Th_WrongNumArgs(interp, "anycap STRING"); } for(i=0; rc==0 && i<argl[1]; i++){ rc = login_has_capability((char*)&argv[1][i],1,0); } if( g.thTrace ){ Th_Trace("[anycap %#h] => %d<br />\n", argl[1], argv[1], rc); } Th_SetResultInt(interp, rc); return TH_OK; } /* ** TH1 command: combobox NAME TEXT-LIST NUMLINES |
︙ | ︙ | |||
1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 | return TH_ERROR; } }else{ Th_SetResult(interp, "repository unavailable", -1); return TH_ERROR; } } #ifdef _WIN32 # include <windows.h> #else # include <sys/time.h> # include <sys/resource.h> #endif | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 | return TH_ERROR; } }else{ Th_SetResult(interp, "repository unavailable", -1); return TH_ERROR; } } /* ** TH1 command: unversioned content FILENAME ** ** Attempts to locate the specified unversioned file and return its contents. ** An error is generated if the repository is not open or the unversioned file ** cannot be found. */ static int unversionedContentCmd( Th_Interp *interp, void *p, int argc, const char **argv, int *argl ){ if( argc!=3 ){ return Th_WrongNumArgs(interp, "unversioned content FILENAME"); } if( Th_IsRepositoryOpen() ){ Blob content; if( unversioned_content(argv[2], &content)==0 ){ Th_SetResult(interp, blob_str(&content), blob_size(&content)); blob_reset(&content); return TH_OK; }else{ return TH_ERROR; } }else{ Th_SetResult(interp, "repository unavailable", -1); return TH_ERROR; } } /* ** TH1 command: unversioned list ** ** Returns a list of the names of all unversioned files held in the local ** repository. An error is generated if the repository is not open. */ static int unversionedListCmd( Th_Interp *interp, void *p, int argc, const char **argv, int *argl ){ if( argc!=2 ){ return Th_WrongNumArgs(interp, "unversioned list"); } if( Th_IsRepositoryOpen() ){ Stmt q; char *zList = 0; int nList = 0; db_prepare(&q, "SELECT name FROM unversioned WHERE hash IS NOT NULL" " ORDER BY name"); while( db_step(&q)==SQLITE_ROW ){ Th_ListAppend(interp, &zList, &nList, db_column_text(&q,0), -1); } db_finalize(&q); Th_SetResult(interp, zList, nList); Th_Free(interp, zList); return TH_OK; }else{ Th_SetResult(interp, "repository unavailable", -1); return TH_ERROR; } } static int unversionedCmd( Th_Interp *interp, void *p, int argc, const char **argv, int *argl ){ static const Th_SubCommand aSub[] = { { "content", unversionedContentCmd }, { "list", unversionedListCmd }, { 0, 0 } }; return Th_CallSubCommand(interp, p, argc, argv, argl, aSub); } #ifdef _WIN32 # include <windows.h> #else # include <sys/time.h> # include <sys/resource.h> #endif |
︙ | ︙ | |||
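The two TH1 subcommands added above can be used together from a skin or other TH1-enabled page. This is a small, untested sketch that assumes a context where the repository is open (both subcommands raise an error otherwise) and relies only on core TH1 commands such as set, llength, lindex, and string:

    <th1>
      set uvfiles [unversioned list]
      puts "unversioned files: $uvfiles"
      if {[llength $uvfiles] > 0} {
        set first [lindex $uvfiles 0]
        puts "$first is [string length [unversioned content $first]] bytes"
      }
    </th1>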
1426 1427 1428 1429 1430 1431 1432 | sqlite3_randomness(n, aRand); encode16(aRand, zOut, n); Th_SetResult(interp, (const char *)zOut, -1); return TH_OK; } /* | > > > > > > > > > > > > | | 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 1529 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 | sqlite3_randomness(n, aRand); encode16(aRand, zOut, n); Th_SetResult(interp, (const char *)zOut, -1); return TH_OK; } /* ** Run sqlite3_step() while suppressing error messages sent to the ** rendered webpage or to the console. */ static int ignore_errors_step(sqlite3_stmt *pStmt){ int rc; g.dbIgnoreErrors++; rc = sqlite3_step(pStmt); g.dbIgnoreErrors--; return rc; } /* ** TH1 command: query [-nocomplain] SQL CODE ** ** Run the SQL query given by the SQL argument. For each row in the result ** set, run CODE. ** ** In SQL, parameters such as $var are filled in using the value of variable ** "var". Result values are stored in variables with the column name prior ** to each invocation of CODE. |
︙ | ︙ | |||
1451 1452 1453 1454 1455 1456 1457 1458 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 1477 1478 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 | const char *zSql; int nSql; const char *zTail; int n, i; int res = TH_OK; int nVar; char *zErr = 0; if( argc!=3 ){ return Th_WrongNumArgs(interp, "query SQL CODE"); } if( g.db==0 ){ Th_ErrorMessage(interp, "database is not open", 0, 0); return TH_ERROR; } zSql = argv[1]; nSql = argl[1]; while( res==TH_OK && nSql>0 ){ zErr = 0; sqlite3_set_authorizer(g.db, report_query_authorizer, (void*)&zErr); rc = sqlite3_prepare_v2(g.db, argv[1], argl[1], &pStmt, &zTail); sqlite3_set_authorizer(g.db, 0, 0); if( rc!=0 || zErr!=0 ){ Th_ErrorMessage(interp, "SQL error: ", zErr ? zErr : sqlite3_errmsg(g.db), -1); return TH_ERROR; } n = (int)(zTail - zSql); zSql += n; nSql -= n; if( pStmt==0 ) continue; nVar = sqlite3_bind_parameter_count(pStmt); for(i=1; i<=nVar; i++){ const char *zVar = sqlite3_bind_parameter_name(pStmt, i); int szVar = zVar ? th_strlen(zVar) : 0; if( szVar>1 && zVar[0]=='$' && Th_GetVar(interp, zVar+1, szVar-1)==TH_OK ){ int nVal; const char *zVal = Th_GetResult(interp, &nVal); sqlite3_bind_text(pStmt, i, zVal, nVal, SQLITE_TRANSIENT); } } | > > > > > > > > > > > | > | 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 | const char *zSql; int nSql; const char *zTail; int n, i; int res = TH_OK; int nVar; char *zErr = 0; int noComplain = 0; if( argc>3 && argl[1]==11 && strncmp(argv[1], "-nocomplain", 11)==0 ){ argc--; argv++; argl++; noComplain = 1; } if( argc!=3 ){ return Th_WrongNumArgs(interp, "query SQL CODE"); } if( g.db==0 ){ if( noComplain ) return TH_OK; Th_ErrorMessage(interp, "database is not open", 0, 0); return TH_ERROR; } zSql = argv[1]; nSql = argl[1]; while( res==TH_OK && nSql>0 ){ zErr = 0; sqlite3_set_authorizer(g.db, report_query_authorizer, (void*)&zErr); g.dbIgnoreErrors++; rc = sqlite3_prepare_v2(g.db, argv[1], argl[1], &pStmt, &zTail); g.dbIgnoreErrors--; sqlite3_set_authorizer(g.db, 0, 0); if( rc!=0 || zErr!=0 ){ if( noComplain ) return TH_OK; Th_ErrorMessage(interp, "SQL error: ", zErr ? zErr : sqlite3_errmsg(g.db), -1); return TH_ERROR; } n = (int)(zTail - zSql); zSql += n; nSql -= n; if( pStmt==0 ) continue; nVar = sqlite3_bind_parameter_count(pStmt); for(i=1; i<=nVar; i++){ const char *zVar = sqlite3_bind_parameter_name(pStmt, i); int szVar = zVar ? 
th_strlen(zVar) : 0; if( szVar>1 && zVar[0]=='$' && Th_GetVar(interp, zVar+1, szVar-1)==TH_OK ){ int nVal; const char *zVal = Th_GetResult(interp, &nVal); sqlite3_bind_text(pStmt, i, zVal, nVal, SQLITE_TRANSIENT); } } while( res==TH_OK && ignore_errors_step(pStmt)==SQLITE_ROW ){ int nCol = sqlite3_column_count(pStmt); for(i=0; i<nCol; i++){ const char *zCol = sqlite3_column_name(pStmt, i); int szCol = th_strlen(zCol); const char *zVal = (const char*)sqlite3_column_text(pStmt, i); int szVal = sqlite3_column_bytes(pStmt, i); Th_SetVar(interp, zCol, szCol, zVal, szVal); } res = Th_Eval(interp, 0, argv[2], argl[2]); if( res==TH_BREAK || res==TH_CONTINUE ) res = TH_OK; } rc = sqlite3_finalize(pStmt); if( rc!=SQLITE_OK ){ if( noComplain ) return TH_OK; Th_ErrorMessage(interp, "SQL error: ", sqlite3_errmsg(g.db), -1); return TH_ERROR; } } return res; } |
︙ | ︙ | |||
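Because -nocomplain turns both the "database is not open" case and SQL preparation failures into silent no-ops, it suits skin fragments that must still render on repositories where a table or column is missing. An untested TH1 sketch; the query is illustrative and assumes the standard repository config table:

    <th1>
      query -nocomplain {SELECT value AS pn FROM config WHERE name='project-name'} {
        puts "Project: $pn"
      }
    </th1>

Each result column is exposed as a TH1 variable named after the column, so aliasing with AS keeps the variable name predictable.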
1854 1855 1856 1857 1858 1859 1860 1861 1862 1863 1864 1865 1866 1867 | {"setParameter", setParameterCmd, 0}, {"setting", settingCmd, 0}, {"styleHeader", styleHeaderCmd, 0}, {"styleFooter", styleFooterCmd, 0}, {"tclReady", tclReadyCmd, 0}, {"trace", traceCmd, 0}, {"stime", stimeCmd, 0}, {"utime", utimeCmd, 0}, {"verifyCsrf", verifyCsrfCmd, 0}, {"wiki", wikiCmd, (void*)&aFlags[0]}, {0, 0, 0} }; if( g.thTrace ){ Th_Trace("th1-init 0x%x => 0x%x<br />\n", g.th1Flags, flags); | > | 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 | {"setParameter", setParameterCmd, 0}, {"setting", settingCmd, 0}, {"styleHeader", styleHeaderCmd, 0}, {"styleFooter", styleFooterCmd, 0}, {"tclReady", tclReadyCmd, 0}, {"trace", traceCmd, 0}, {"stime", stimeCmd, 0}, {"unversioned", unversionedCmd, 0}, {"utime", utimeCmd, 0}, {"verifyCsrf", verifyCsrfCmd, 0}, {"wiki", wikiCmd, (void*)&aFlags[0]}, {0, 0, 0} }; if( g.thTrace ){ Th_Trace("th1-init 0x%x => 0x%x<br />\n", g.th1Flags, flags); |
︙ | ︙ | |||
1890 1891 1892 1893 1894 1895 1896 | db_get_boolean("tcl", 0) ){ if( !g.tcl.setup ){ g.tcl.setup = db_get("tcl-setup", 0); /* Grab Tcl setup script. */ } th_register_tcl(g.interp, &g.tcl); /* Tcl integration commands. */ } #endif | | | 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 | db_get_boolean("tcl", 0) ){ if( !g.tcl.setup ){ g.tcl.setup = db_get("tcl-setup", 0); /* Grab Tcl setup script. */ } th_register_tcl(g.interp, &g.tcl); /* Tcl integration commands. */ } #endif for(i=0; i<count(aCommand); i++){ if ( !aCommand[i].zName || !aCommand[i].xProc ) continue; Th_CreateCommand(g.interp, aCommand[i].zName, aCommand[i].xProc, aCommand[i].pContext, 0); } }else{ wasInit = 1; } |
︙ | ︙ | |||
2102 2103 2104 2105 2106 2107 2108 | ** This function is called by Fossil just prior to dispatching a command. ** Returning a value other than TH_OK from this function (i.e. via an ** evaluated script raising an error or calling [break]/[continue]) will ** cause the actual command execution to be skipped. */ int Th_CommandHook( const char *zName, | | | 2215 2216 2217 2218 2219 2220 2221 2222 2223 2224 2225 2226 2227 2228 2229 | ** This function is called by Fossil just prior to dispatching a command. ** Returning a value other than TH_OK from this function (i.e. via an ** evaluated script raising an error or calling [break]/[continue]) will ** cause the actual command execution to be skipped. */ int Th_CommandHook( const char *zName, unsigned int cmdFlags ){ int rc = TH_OK; if( !Th_AreHooksEnabled() ) return rc; Th_FossilInit(TH_INIT_HOOK); Th_Store("cmd_name", zName); Th_StoreList("cmd_args", g.argv, g.argc); Th_StoreInt("cmd_flags", cmdFlags); |
︙ | ︙ | |||
2158 2159 2160 2161 2162 2163 2164 | ** Returning a value other than TH_OK from this function (i.e. via an ** evaluated script raising an error or calling [break]/[continue]) may ** cause an error message to be displayed to the local interactive user. ** Currently, TH1 error messages generated by this function are ignored. */ int Th_CommandNotify( const char *zName, | | | 2271 2272 2273 2274 2275 2276 2277 2278 2279 2280 2281 2282 2283 2284 2285 | ** Returning a value other than TH_OK from this function (i.e. via an ** evaluated script raising an error or calling [break]/[continue]) may ** cause an error message to be displayed to the local interactive user. ** Currently, TH1 error messages generated by this function are ignored. */ int Th_CommandNotify( const char *zName, unsigned int cmdFlags ){ int rc = TH_OK; if( !Th_AreHooksEnabled() ) return rc; Th_FossilInit(TH_INIT_HOOK); Th_Store("cmd_name", zName); Th_StoreList("cmd_args", g.argv, g.argc); Th_StoreInt("cmd_flags", cmdFlags); |
︙ | ︙ | |||
2189 2190 2191 2192 2193 2194 2195 | ** This function is called by Fossil just prior to processing a web page. ** Returning a value other than TH_OK from this function (i.e. via an ** evaluated script raising an error or calling [break]/[continue]) will ** cause the actual web page processing to be skipped. */ int Th_WebpageHook( const char *zName, | | | 2302 2303 2304 2305 2306 2307 2308 2309 2310 2311 2312 2313 2314 2315 2316 | ** This function is called by Fossil just prior to processing a web page. ** Returning a value other than TH_OK from this function (i.e. via an ** evaluated script raising an error or calling [break]/[continue]) will ** cause the actual web page processing to be skipped. */ int Th_WebpageHook( const char *zName, unsigned int cmdFlags ){ int rc = TH_OK; if( !Th_AreHooksEnabled() ) return rc; Th_FossilInit(TH_INIT_HOOK); Th_Store("web_name", zName); Th_StoreList("web_args", g.argv, g.argc); Th_StoreInt("web_flags", cmdFlags); |
︙ | ︙ | |||
2245 2246 2247 2248 2249 2250 2251 | ** Returning a value other than TH_OK from this function (i.e. via an ** evaluated script raising an error or calling [break]/[continue]) may ** cause an error message to be displayed to the remote user. ** Currently, TH1 error messages generated by this function are ignored. */ int Th_WebpageNotify( const char *zName, | | | 2358 2359 2360 2361 2362 2363 2364 2365 2366 2367 2368 2369 2370 2371 2372 | ** Returning a value other than TH_OK from this function (i.e. via an ** evaluated script raising an error or calling [break]/[continue]) may ** cause an error message to be displayed to the remote user. ** Currently, TH1 error messages generated by this function are ignored. */ int Th_WebpageNotify( const char *zName, unsigned int cmdFlags ){ int rc = TH_OK; if( !Th_AreHooksEnabled() ) return rc; Th_FossilInit(TH_INIT_HOOK); Th_Store("web_name", zName); Th_StoreList("web_args", g.argv, g.argc); Th_StoreInt("web_flags", cmdFlags); |
︙ | ︙ | |||
2325 2326 2327 2328 2329 2330 2331 | zResult = (char*)Th_GetResult(g.interp, &n); sendText((char*)zResult, n, encode); }else if( z[i]=='<' && isBeginScriptTag(&z[i]) ){ sendText(z, i, 0); z += i+5; for(i=0; z[i] && (z[i]!='<' || !isEndScriptTag(&z[i])); i++){} if( g.thTrace ){ | | | 2438 2439 2440 2441 2442 2443 2444 2445 2446 2447 2448 2449 2450 2451 2452 | zResult = (char*)Th_GetResult(g.interp, &n); sendText((char*)zResult, n, encode); }else if( z[i]=='<' && isBeginScriptTag(&z[i]) ){ sendText(z, i, 0); z += i+5; for(i=0; z[i] && (z[i]!='<' || !isEndScriptTag(&z[i])); i++){} if( g.thTrace ){ Th_Trace("eval {<pre>%#h</pre>}<br />", i, z); } rc = Th_Eval(g.interp, 0, (const char*)z, i); if( rc!=TH_OK ) break; z += i; if( z[0] ){ z += 6; } i = 0; }else{ |
︙ | ︙ | |||
2359 2360 2361 2362 2363 2364 2365 2366 2367 2368 2369 2370 2371 2372 2373 2374 2375 2376 2377 2378 2379 2380 2381 2382 2383 2384 | ** on standard output. ** ** Options: ** ** --cgi Include a CGI response header in the output ** --http Include an HTTP response header in the output ** --open-config Open the configuration database ** --th-trace Trace TH1 execution (for debugging purposes) */ void test_th_render(void){ int forceCgi, fullHttpReply; Blob in; Th_InitTraceLog(); forceCgi = find_option("cgi", 0, 0)!=0; fullHttpReply = find_option("http", 0, 0)!=0; if( fullHttpReply ) forceCgi = 1; if( forceCgi ) Th_ForceCgi(fullHttpReply); if( find_option("open-config", 0, 0)!=0 ){ Th_OpenConfig(1); } verify_all_options(); if( g.argc<3 ){ usage("FILE"); } blob_zero(&in); blob_read_from_file(&in, g.argv[2]); | > > > > > > > > > > > > | 2472 2473 2474 2475 2476 2477 2478 2479 2480 2481 2482 2483 2484 2485 2486 2487 2488 2489 2490 2491 2492 2493 2494 2495 2496 2497 2498 2499 2500 2501 2502 2503 2504 2505 2506 2507 2508 2509 | ** on standard output. ** ** Options: ** ** --cgi Include a CGI response header in the output ** --http Include an HTTP response header in the output ** --open-config Open the configuration database ** --set-anon-caps Set anonymous login capabilities ** --set-user-caps Set user login capabilities ** --th-trace Trace TH1 execution (for debugging purposes) */ void test_th_render(void){ int forceCgi, fullHttpReply; Blob in; Th_InitTraceLog(); forceCgi = find_option("cgi", 0, 0)!=0; fullHttpReply = find_option("http", 0, 0)!=0; if( fullHttpReply ) forceCgi = 1; if( forceCgi ) Th_ForceCgi(fullHttpReply); if( find_option("open-config", 0, 0)!=0 ){ Th_OpenConfig(1); } if( find_option("set-anon-caps", 0, 0)!=0 ){ const char *zCap = fossil_getenv("TH1_TEST_ANON_CAPS"); login_set_capabilities(zCap ? zCap : "sx", LOGIN_ANON); g.useLocalauth = 1; } if( find_option("set-user-caps", 0, 0)!=0 ){ const char *zCap = fossil_getenv("TH1_TEST_USER_CAPS"); login_set_capabilities(zCap ? zCap : "sx", 0); g.useLocalauth = 1; } verify_all_options(); if( g.argc<3 ){ usage("FILE"); } blob_zero(&in); blob_read_from_file(&in, g.argv[2]); |
︙ | ︙ | |||
2396 2397 2398 2399 2400 2401 2402 2403 2404 2405 2406 2407 2408 2409 2410 2411 2412 2413 2414 2415 2416 2417 2418 2419 2420 2421 2422 | ** script and show the results on standard output. ** ** Options: ** ** --cgi Include a CGI response header in the output ** --http Include an HTTP response header in the output ** --open-config Open the configuration database ** --th-trace Trace TH1 execution (for debugging purposes) */ void test_th_eval(void){ int rc; const char *zRc; int forceCgi, fullHttpReply; Th_InitTraceLog(); forceCgi = find_option("cgi", 0, 0)!=0; fullHttpReply = find_option("http", 0, 0)!=0; if( fullHttpReply ) forceCgi = 1; if( forceCgi ) Th_ForceCgi(fullHttpReply); if( find_option("open-config", 0, 0)!=0 ){ Th_OpenConfig(1); } verify_all_options(); if( g.argc!=3 ){ usage("script"); } Th_FossilInit(TH_INIT_DEFAULT); rc = Th_Eval(g.interp, 0, g.argv[2], -1); | > > > > > > > > > > > > | 2521 2522 2523 2524 2525 2526 2527 2528 2529 2530 2531 2532 2533 2534 2535 2536 2537 2538 2539 2540 2541 2542 2543 2544 2545 2546 2547 2548 2549 2550 2551 2552 2553 2554 2555 2556 2557 2558 2559 | ** script and show the results on standard output. ** ** Options: ** ** --cgi Include a CGI response header in the output ** --http Include an HTTP response header in the output ** --open-config Open the configuration database ** --set-anon-caps Set anonymous login capabilities ** --set-user-caps Set user login capabilities ** --th-trace Trace TH1 execution (for debugging purposes) */ void test_th_eval(void){ int rc; const char *zRc; int forceCgi, fullHttpReply; Th_InitTraceLog(); forceCgi = find_option("cgi", 0, 0)!=0; fullHttpReply = find_option("http", 0, 0)!=0; if( fullHttpReply ) forceCgi = 1; if( forceCgi ) Th_ForceCgi(fullHttpReply); if( find_option("open-config", 0, 0)!=0 ){ Th_OpenConfig(1); } if( find_option("set-anon-caps", 0, 0)!=0 ){ const char *zCap = fossil_getenv("TH1_TEST_ANON_CAPS"); login_set_capabilities(zCap ? zCap : "sx", LOGIN_ANON); g.useLocalauth = 1; } if( find_option("set-user-caps", 0, 0)!=0 ){ const char *zCap = fossil_getenv("TH1_TEST_USER_CAPS"); login_set_capabilities(zCap ? zCap : "sx", 0); g.useLocalauth = 1; } verify_all_options(); if( g.argc!=3 ){ usage("script"); } Th_FossilInit(TH_INIT_DEFAULT); rc = Th_Eval(g.interp, 0, g.argv[2], -1); |
︙ | ︙ | |||
2436 2437 2438 2439 2440 2441 2442 2443 2444 2445 2446 2447 2448 2449 2450 2451 2452 2453 2454 2455 2456 2457 2458 2459 2460 2461 2462 2463 | ** output. ** ** Options: ** ** --cgi Include a CGI response header in the output ** --http Include an HTTP response header in the output ** --open-config Open the configuration database ** --th-trace Trace TH1 execution (for debugging purposes) */ void test_th_source(void){ int rc; const char *zRc; int forceCgi, fullHttpReply; Blob in; Th_InitTraceLog(); forceCgi = find_option("cgi", 0, 0)!=0; fullHttpReply = find_option("http", 0, 0)!=0; if( fullHttpReply ) forceCgi = 1; if( forceCgi ) Th_ForceCgi(fullHttpReply); if( find_option("open-config", 0, 0)!=0 ){ Th_OpenConfig(1); } verify_all_options(); if( g.argc!=3 ){ usage("file"); } blob_zero(&in); blob_read_from_file(&in, g.argv[2]); | > > > > > > > > > > > > | 2573 2574 2575 2576 2577 2578 2579 2580 2581 2582 2583 2584 2585 2586 2587 2588 2589 2590 2591 2592 2593 2594 2595 2596 2597 2598 2599 2600 2601 2602 2603 2604 2605 2606 2607 2608 2609 2610 2611 2612 | ** output. ** ** Options: ** ** --cgi Include a CGI response header in the output ** --http Include an HTTP response header in the output ** --open-config Open the configuration database ** --set-anon-caps Set anonymous login capabilities ** --set-user-caps Set user login capabilities ** --th-trace Trace TH1 execution (for debugging purposes) */ void test_th_source(void){ int rc; const char *zRc; int forceCgi, fullHttpReply; Blob in; Th_InitTraceLog(); forceCgi = find_option("cgi", 0, 0)!=0; fullHttpReply = find_option("http", 0, 0)!=0; if( fullHttpReply ) forceCgi = 1; if( forceCgi ) Th_ForceCgi(fullHttpReply); if( find_option("open-config", 0, 0)!=0 ){ Th_OpenConfig(1); } if( find_option("set-anon-caps", 0, 0)!=0 ){ const char *zCap = fossil_getenv("TH1_TEST_ANON_CAPS"); login_set_capabilities(zCap ? zCap : "sx", LOGIN_ANON); g.useLocalauth = 1; } if( find_option("set-user-caps", 0, 0)!=0 ){ const char *zCap = fossil_getenv("TH1_TEST_USER_CAPS"); login_set_capabilities(zCap ? zCap : "sx", 0); g.useLocalauth = 1; } verify_all_options(); if( g.argc!=3 ){ usage("file"); } blob_zero(&in); blob_read_from_file(&in, g.argv[2]); |
︙ | ︙ | |||
2517 2518 2519 2520 2521 2522 2523 | if( fullHttpReply ) forceCgi = 1; if( forceCgi ) Th_ForceCgi(fullHttpReply); verify_all_options(); if( g.argc<5 ){ usage("TYPE NAME FLAGS"); } if( fossil_stricmp(g.argv[2], "cmdhook")==0 ){ | | | | | | | 2666 2667 2668 2669 2670 2671 2672 2673 2674 2675 2676 2677 2678 2679 2680 2681 2682 2683 2684 2685 2686 2687 2688 | if( fullHttpReply ) forceCgi = 1; if( forceCgi ) Th_ForceCgi(fullHttpReply); verify_all_options(); if( g.argc<5 ){ usage("TYPE NAME FLAGS"); } if( fossil_stricmp(g.argv[2], "cmdhook")==0 ){ rc = Th_CommandHook(g.argv[3], (unsigned int)atoi(g.argv[4])); }else if( fossil_stricmp(g.argv[2], "cmdnotify")==0 ){ rc = Th_CommandNotify(g.argv[3], (unsigned int)atoi(g.argv[4])); }else if( fossil_stricmp(g.argv[2], "webhook")==0 ){ rc = Th_WebpageHook(g.argv[3], (unsigned int)atoi(g.argv[4])); }else if( fossil_stricmp(g.argv[2], "webnotify")==0 ){ rc = Th_WebpageNotify(g.argv[3], (unsigned int)atoi(g.argv[4])); }else{ fossil_fatal("Unknown TH1 hook %s", g.argv[2]); } if( g.interp ){ zResult = (char*)Th_GetResult(g.interp, &nResult); } sendText("RESULT (", -1, 0); sendText(Th_ReturnCodeName(rc, 0), -1, 0); sendText(")", -1, 0); |
︙ | ︙ |
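The --set-anon-caps and --set-user-caps options added to the three TH1 test harnesses above read their capability letters from the TH1_TEST_ANON_CAPS and TH1_TEST_USER_CAPS environment variables, falling back to "sx" when unset. Assuming the usual fossil test-th-eval name for the command implemented by test_th_eval() (the COMMAND: line itself is outside this diff), capability-dependent scripts can then be exercised from a shell:

    TH1_TEST_USER_CAPS=jor fossil test-th-eval --open-config --set-user-caps "hascap j"

The capability letters here are only an example; any string accepted by login_set_capabilities() will do.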
Changes to src/th_tcl.c.
︙ | ︙ | |||
831 832 833 834 835 836 837 | Tcl_Interp *interp ){ int i; Th_Interp *th1Interp = (Th_Interp *)clientData; if( !th1Interp ) return; /* Remove the Tcl integration commands. */ | | | 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 | Tcl_Interp *interp ){ int i; Th_Interp *th1Interp = (Th_Interp *)clientData; if( !th1Interp ) return; /* Remove the Tcl integration commands. */ for(i=0; i<count(aCommand); i++){ Th_RenameCommand(th1Interp, aCommand[i].zName, -1, NULL, 0); } } /* ** When Tcl stubs support is enabled, attempts to dynamically load the Tcl ** shared library and fetch the function pointers necessary to create an |
︙ | ︙ | |||
1259 1260 1261 1262 1263 1264 1265 | int th_register_tcl( Th_Interp *interp, void *pContext ){ int i; /* Add the Tcl integration commands to TH1. */ | | | 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 | int th_register_tcl( Th_Interp *interp, void *pContext ){ int i; /* Add the Tcl integration commands to TH1. */ for(i=0; i<count(aCommand); i++){ void *ctx; if( !aCommand[i].zName || !aCommand[i].xProc ) continue; ctx = aCommand[i].pContext; /* Use Tcl interpreter for context? */ if( !ctx ) ctx = pContext; Th_CreateCommand(interp, aCommand[i].zName, aCommand[i].xProc, ctx, 0); } return TH_OK; } #endif /* FOSSIL_ENABLE_TCL */ |
Changes to src/timeline.c.
︙ | ︙ | |||
24 25 26 27 28 29 30 31 32 33 34 35 36 37 | #include "timeline.h" /* ** The value of one second in julianday notation */ #define ONE_SECOND (1.0/86400.0) /* ** Add an appropriate tag to the output if "rid" is unpublished (private) */ #define UNPUB_TAG "<em>(unpublished)</em>" void tag_private_status(int rid){ if( content_is_private(rid) ){ cgi_printf("%s", UNPUB_TAG); | > > > > > > > > > | 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 | #include "timeline.h" /* ** The value of one second in julianday notation */ #define ONE_SECOND (1.0/86400.0) /* ** timeline mode options */ #define TIMELINE_MODE_NONE 0 #define TIMELINE_MODE_BEFORE 1 #define TIMELINE_MODE_AFTER 2 #define TIMELINE_MODE_CHILDREN 3 #define TIMELINE_MODE_PARENTS 4 /* ** Add an appropriate tag to the output if "rid" is unpublished (private) */ #define UNPUB_TAG "<em>(unpublished)</em>" void tag_private_status(int rid){ if( content_is_private(rid) ){ cgi_printf("%s", UNPUB_TAG); |
︙ | ︙ | |||
176 177 178 179 180 181 182 | @ %h(zBr) - %s(hash_color(zBr)) - @ Omnes nos quasi oves erravimus unusquisque in viam @ suam declinavit.</p> cnt++; } } if( cnt ){ | | | 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 | @ %h(zBr) - %s(hash_color(zBr)) - @ Omnes nos quasi oves erravimus unusquisque in viam @ suam declinavit.</p> cnt++; } } if( cnt ){ @ <hr /> } @ <form method="post" action="%s(g.zTop)/hash-color-test"> @ <p>Enter candidate branch names below and see them displayed in their @ default background colors above.</p> for(i=0; i<10; i++){ sqlite3_snprintf(sizeof(zNm),zNm,"b%d",i); zBr = P(zNm); |
︙ | ︙ | |||
270 271 272 273 274 275 276 | int modPending; /* Pending moderation */ char *zDateLink; /* URL for the link on the timestamp */ char zTime[20]; if( zDate==0 ){ zDate = "YYYY-MM-DD HH:MM:SS"; /* Something wrong with the repo */ } | | | 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 | int modPending; /* Pending moderation */ char *zDateLink; /* URL for the link on the timestamp */ char zTime[20]; if( zDate==0 ){ zDate = "YYYY-MM-DD HH:MM:SS"; /* Something wrong with the repo */ } modPending = moderation_pending(rid); if( tagid ){ if( modPending ) tagid = -tagid; if( tagid==prevTagid ){ if( tmFlags & TIMELINE_BRIEF ){ suppressCnt++; continue; }else{ |
︙ | ︙ | |||
297 298 299 300 301 302 303 | if( pendingEndTr>1 ){ @ <tr class="timelineSpacer"></tr> } pendingEndTr = 0; } if( fossil_strcmp(zType,"div")==0 ){ if( !prevWasDivider ){ | | | 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 | if( pendingEndTr>1 ){ @ <tr class="timelineSpacer"></tr> } pendingEndTr = 0; } if( fossil_strcmp(zType,"div")==0 ){ if( !prevWasDivider ){ @ <tr><td colspan="3"><hr class="timelineMarker" /></td></tr> } prevWasDivider = 1; continue; } prevWasDivider = 0; /* Date format codes: ** (0) HH:MM |
︙ | ︙ | |||
379 380 381 382 383 384 385 | static Stmt qparent; db_static_prepare(&qparent, "SELECT pid FROM plink" " WHERE cid=:rid AND pid NOT IN phantom" " ORDER BY isprim DESC /*sort*/" ); db_bind_int(&qparent, ":rid", rid); | | | 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 | static Stmt qparent; db_static_prepare(&qparent, "SELECT pid FROM plink" " WHERE cid=:rid AND pid NOT IN phantom" " ORDER BY isprim DESC /*sort*/" ); db_bind_int(&qparent, ":rid", rid); while( db_step(&qparent)==SQLITE_ROW && nParent<count(aParent) ){ aParent[nParent++] = db_column_int(&qparent, 0); } db_reset(&qparent); gidx = graph_add_row(pGraph, rid, nParent, aParent, zBr, zBgClr, zUuid, isLeaf); db_reset(&qbranch); @ <div id="m%d(gidx)" class="tl-nodemark"></div> |
︙ | ︙ | |||
562 563 564 565 566 567 568 | if( !isNew && !isDel && zOldName!=0 ){ @ <li> %h(zOldName) → %h(zFilename)%s(zId) } continue; } zA = href("%R/artifact/%!S",fid?zNew:zOld); if( content_is_private(fid) ){ | | | 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 | if( !isNew && !isDel && zOldName!=0 ){ @ <li> %h(zOldName) → %h(zFilename)%s(zId) } continue; } zA = href("%R/artifact/%!S",fid?zNew:zOld); if( content_is_private(fid) ){ zUnpub = UNPUB_TAG; } if( isNew ){ @ <li> %s(zA)%h(zFilename)</a>%s(zId) %s(zUnpub) if( isMergeNew ){ @ (added by merge) }else{ @ (new file) |
︙ | ︙ | |||
726 727 728 729 730 731 732 | pRow->iRail, /* r */ pRow->bDescender, /* d */ pRow->mergeOut, /* mo */ pRow->mergeUpto, /* mu */ pRow->aiRiser[pRow->iRail], /* u */ pRow->isLeaf ? 1 : 0 /* f */ ); | | | | 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 | pRow->iRail, /* r */ pRow->bDescender, /* d */ pRow->mergeOut, /* mo */ pRow->mergeUpto, /* mu */ pRow->aiRiser[pRow->iRail], /* u */ pRow->isLeaf ? 1 : 0 /* f */ ); /* au */ cSep = '['; for(i=0; i<GR_MAX_RAIL; i++){ if( i==pRow->iRail ) continue; if( pRow->aiRiser[i]>0 ){ cgi_printf("%c%d,%d", cSep, i, pRow->aiRiser[i]); cSep = ','; } } if( cSep=='[' ) cgi_printf("["); cgi_printf("],"); if( colorGraph && pRow->zBgClr[0]=='#' ){ cgi_printf("fg:\"%s\",", bg_to_fg(pRow->zBgClr)); } /* mi */ cgi_printf("mi:"); cSep = '['; for(i=0; i<GR_MAX_RAIL; i++){ if( pRow->mergeIn[i] ){ int mi = i; if( (pRow->mergeDown >> i) & 1 ) mi = -mi; cgi_printf("%c%d", cSep, mi); cSep = ','; } } if( cSep=='[' ) cgi_printf("["); cgi_printf("],h:\"%!S\"}%s", pRow->zUuid, pRow->pNext ? ",\n" : "];\n"); } |
︙ | ︙ | |||
1072 1073 1074 1075 1076 1077 1078 | static void timeline_submenu( HQuery *pUrl, /* Base URL */ const char *zMenuName, /* Submenu name */ const char *zParam, /* Parameter value to add or change */ const char *zValue, /* Value of the new parameter */ const char *zRemove /* Parameter to omit */ ){ | | | 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 | static void timeline_submenu( HQuery *pUrl, /* Base URL */ const char *zMenuName, /* Submenu name */ const char *zParam, /* Parameter value to add or change */ const char *zValue, /* Value of the new parameter */ const char *zRemove /* Parameter to omit */ ){ style_submenu_element(zMenuName, "%s", url_render(pUrl, zParam, zValue, zRemove, 0)); } /* ** Convert a symbolic name used as an argument to the a=, b=, or c= ** query parameters of timeline into a julianday mtime value. |
︙ | ︙ | |||
1175 1176 1177 1178 1179 1180 1181 | az[i++] = "t"; az[i++] = "Tickets"; } if( g.perm.RdWiki ){ az[i++] = "w"; az[i++] = "Wiki"; } | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | | | | | | | | | > > > | < | | | | | | > > | > | > > < < < | > > > > > > > < | 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 1454 1455 1456 1457 1458 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 1477 1478 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493 1494 1495 1496 1497 1498 1499 1500 1501 1502 1503 1504 1505 1506 1507 1508 1509 1510 1511 1512 1513 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 | az[i++] = "t"; az[i++] = "Tickets"; } if( g.perm.RdWiki ){ az[i++] = "w"; az[i++] = "Wiki"; } assert( i<=count(az) ); } if( i>2 ){ style_submenu_multichoice("y", i/2, az, isDisabled); } } /* ** If the zChng string is not NULL, then it should be a comma-separated ** list of glob patterns for filenames. Add an term to the WHERE clause ** for the SQL statement under construction that excludes any check-in that ** does not modify one or more files matching the globs. */ static void addFileGlobExclusion( const char *zChng, /* The filename GLOB list */ Blob *pSql /* The SELECT statement under construction */ ){ if( zChng==0 || zChng[0]==0 ) return; blob_append_sql(pSql," AND event.objid IN (" "SELECT mlink.mid FROM mlink, filename" " WHERE mlink.fnid=filename.fnid AND %s)", glob_expr("filename.name", zChng)); } static void addFileGlobDescription( const char *zChng, /* The filename GLOB list */ Blob *pDescription /* Result description */ ){ if( zChng==0 || zChng[0]==0 ) return; blob_appendf(pDescription, " that include changes to files matching %Q", zChng); } /* ** Tag match expression type code. 
*/ typedef enum { MS_EXACT, /* Matches a single tag by exact string comparison. */ MS_GLOB, /* Matches tags against a list of GLOB patterns. */ MS_LIKE, /* Matches tags against a list of LIKE patterns. */ MS_REGEXP /* Matches tags against a list of regular expressions. */ } MatchStyle; /* ** Quote a tag string by surrounding it with double quotes and preceding ** internal double quotes and backslashes with backslashes. */ static const char *tagQuote( int len, /* Maximum length of zTag, or negative for unlimited */ const char *zTag /* Tag string */ ){ Blob blob = BLOB_INITIALIZER; int i, j; blob_zero(&blob); blob_append(&blob, "\"", 1); for( i=j=0; zTag[j] && (len<0 || j<len); ++j ){ if( zTag[j]=='\"' || zTag[j]=='\\' ){ if( j>i ){ blob_append(&blob, zTag+i, j-i); } blob_append(&blob, "\\", 1); i = j; } } if( j>i ){ blob_append(&blob, zTag+i, j-i); } blob_append(&blob, "\"", 1); return blob_str(&blob); } /* ** Construct the tag match SQL expression. ** ** This function is adapted from glob_expr() to support the MS_EXACT, MS_GLOB, ** MS_LIKE, and MS_REGEXP match styles. For MS_EXACT, the returned expression ** checks for integer match against the tag ID which is looked up directly by ** this function. For the other modes, the returned SQL expression performs ** string comparisons against the tag names, so it is necessary to join against ** the tag table to access the "tagname" column. ** ** Each pattern is adjusted to to start with "sym-" and be anchored at end. ** ** In MS_REGEXP mode, backslash can be used to protect delimiter characters. ** The backslashes are not removed from the regular expression. ** ** In addition to assembling and returning an SQL expression, this function ** makes an English-language description of the patterns being matched, suitable ** for display in the web interface. ** ** If any errors arise during processing, *zError is set to an error message. ** Otherwise it is set to NULL. */ static const char *tagMatchExpression( MatchStyle matchStyle, /* Match style code */ const char *zTag, /* Tag name, match pattern, or pattern list */ const char **zDesc, /* Output expression description string */ const char **zError /* Output error string */ ){ Blob expr = BLOB_INITIALIZER; /* SQL expression string assembly buffer */ Blob desc = BLOB_INITIALIZER; /* English description of match patterns */ Blob err = BLOB_INITIALIZER; /* Error text assembly buffer */ const char *zStart; /* Text at start of expression */ const char *zDelimiter; /* Text between expression terms */ const char *zEnd; /* Text at end of expression */ const char *zPrefix; /* Text before each match pattern */ const char *zSuffix; /* Text after each match pattern */ const char *zIntro; /* Text introducing pattern description */ const char *zPattern = 0; /* Previous quoted pattern */ const char *zFail = 0; /* Current failure message or NULL if okay */ const char *zOr = " or "; /* Text before final quoted pattern */ char cDel; /* Input delimiter character */ int i; /* Input match pattern length counter */ /* Optimize exact matches by looking up the ID in advance to create a simple * numeric comparison. Bypass the remainder of this function. */ if( matchStyle==MS_EXACT ){ *zDesc = tagQuote(-1, zTag); return mprintf("(tagid=%d)", db_int(-1, "SELECT tagid FROM tag WHERE tagname='sym-%q'", zTag)); } /* Decide pattern prefix and suffix strings according to match style. 
*/ if( matchStyle==MS_GLOB ){ zStart = "("; zDelimiter = " OR "; zEnd = ")"; zPrefix = "tagname GLOB 'sym-"; zSuffix = "'"; zIntro = "glob pattern "; }else if( matchStyle==MS_LIKE ){ zStart = "("; zDelimiter = " OR "; zEnd = ")"; zPrefix = "tagname LIKE 'sym-"; zSuffix = "'"; zIntro = "SQL LIKE pattern "; }else/* if( matchStyle==MS_REGEXP )*/{ zStart = "(tagname REGEXP '^sym-("; zDelimiter = "|"; zEnd = ")$')"; zPrefix = ""; zSuffix = ""; zIntro = "regular expression "; } /* Convert the list of matches into an SQL expression and text description. */ blob_zero(&expr); blob_zero(&desc); blob_zero(&err); while( 1 ){ /* Skip leading delimiters. */ for( ; fossil_isspace(*zTag) || *zTag==','; ++zTag ); /* Next non-delimiter character determines quoting. */ if( !*zTag ){ /* Terminate loop at end of string. */ break; }else if( *zTag=='\'' || *zTag=='"' ){ /* If word is quoted, prepare to stop at end quote. */ cDel = *zTag; ++zTag; }else{ /* If word is not quoted, prepare to stop at delimiter. */ cDel = ','; } /* Find the next delimiter character or end of string. */ for( i=0; zTag[i] && zTag[i]!=cDel; ++i ){ /* If delimiter is comma, also recognize spaces as delimiters. */ if( cDel==',' && fossil_isspace(zTag[i]) ){ break; } /* In regexp mode, ignore delimiters following backslashes. */ if( matchStyle==MS_REGEXP && zTag[i]=='\\' && zTag[i+1] ){ ++i; } } /* Check for regular expression syntax errors. */ if( matchStyle==MS_REGEXP ){ ReCompiled *regexp; char *zTagDup = fossil_strndup(zTag, i); zFail = re_compile(®exp, zTagDup, 0); re_free(regexp); fossil_free(zTagDup); } /* Process success and error results. */ if( !zFail ){ /* Incorporate the match word into the output expression. %q is used to * protect against SQL injection attacks by replacing ' with ''. */ blob_appendf(&expr, "%s%s%#q%s", blob_size(&expr) ? zDelimiter : zStart, zPrefix, i, zTag, zSuffix); /* Build up the description string. */ if( !blob_size(&desc) ){ /* First tag: start with intro followed by first quoted tag. */ blob_append(&desc, zIntro, -1); blob_append(&desc, tagQuote(i, zTag), -1); }else{ if( zPattern ){ /* Third and subsequent tags: append comma then previous tag. */ blob_append(&desc, ", ", 2); blob_append(&desc, zPattern, -1); zOr = ", or "; } /* Second and subsequent tags: store quoted tag for next iteration. */ zPattern = tagQuote(i, zTag); } }else{ /* On error, skip the match word and build up the error message buffer. */ if( !blob_size(&err) ){ blob_append(&err, "Error: ", 7); }else{ blob_append(&err, ", ", 2); } blob_appendf(&err, "(%s%s: %s)", zIntro, tagQuote(i, zTag), zFail); } /* Advance past all consumed input characters. */ zTag += i; if( cDel!=',' && *zTag==cDel ){ ++zTag; } } /* Finalize and extract the pattern description. */ if( zPattern ){ blob_append(&desc, zOr, -1); blob_append(&desc, zPattern, -1); } *zDesc = blob_str(&desc); /* Finalize and extract the error text. */ *zError = blob_size(&err) ? blob_str(&err) : 0; /* Finalize and extract the SQL expression. */ if( blob_size(&expr) ){ blob_append(&expr, zEnd, -1); return blob_str(&expr); } /* If execution reaches this point, the pattern was empty. Return NULL. 
*/ return 0; } /* ** WEBPAGE: timeline ** ** Query parameters: ** ** a=TIMEORTAG After this event ** b=TIMEORTAG Before this event ** c=TIMEORTAG "Circa" this event ** m=TIMEORTAG Mark this event ** n=COUNT Suggested number of events in output ** p=CHECKIN Parents and ancestors of CHECKIN ** d=CHECKIN Descendants of CHECKIN ** dp=CHECKIN The same as d=CHECKIN&p=CHECKIN ** t=TAG Show only check-ins with the given TAG ** r=TAG Show check-ins related to TAG, equivalent to t=TAG&rel ** rel Show related check-ins as well as those matching t=TAG ** mionly Limit rel to show ancestors but not descendants ** ms=STYLE Set tag match style to EXACT, GLOB, LIKE, REGEXP ** u=USER Only show items associated with USER ** y=TYPE 'ci', 'w', 't', 'e', or (default) 'all' ** ng No Graph. ** nd Do not highlight the focus check-in ** v Show details of files changed ** f=CHECKIN Show family (immediate parents and children) of CHECKIN ** from=CHECKIN Path from... ** to=CHECKIN ... to this ** shortest ... show only the shortest path ** uf=FILE_SHA1 Show only check-ins that contain the given file version ** chng=GLOBLIST Show only check-ins that involve changes to a file whose ** name matches one of the comma-separated GLOBLIST ** brbg Background color from branch name ** ubg Background color from user ** namechng Show only check-ins that have filename changes ** forks Show only forks and their children ** ym=YYYY-MM Show only events for the given year/month ** yw=YYYY-WW Show only events for the given week of the given year ** ymd=YYYY-MM-DD Show only events on the given day ** datefmt=N Override the date format ** bisect Show the check-ins that are in the current bisect ** showid Show RIDs ** showsql Show the SQL text ** ** p= and d= can appear individually or together. If either p= or d= ** appears, then u=, y=, a=, and b= are ignored. ** ** If both a= and b= appear then both upper and lower bounds are honored. */ void page_timeline(void){ Stmt q; /* Query used to generate the timeline */ Blob sql; /* text of SQL used to generate timeline */ Blob desc; /* Description of the timeline */ int nEntry; /* Max number of entries on timeline */ int p_rid = name_to_typed_rid(P("p"),"ci"); /* artifact p and its parents */ int d_rid = name_to_typed_rid(P("d"),"ci"); /* artifact d and descendants */ int f_rid = name_to_typed_rid(P("f"),"ci"); /* artifact f and close family */ const char *zUser = P("u"); /* All entries by this user if not NULL */ const char *zType = PD("y","all"); /* Type of events. 
All if NULL */ const char *zAfter = P("a"); /* Events after this time */ const char *zBefore = P("b"); /* Events before this time */ const char *zCirca = P("c"); /* Events near this time */ const char *zMark = P("m"); /* Mark this event or an event this time */ const char *zTagName = P("t"); /* Show events with this tag */ const char *zBrName = P("r"); /* Equivalent to t=TAG&rel */ int related = PB("rel"); /* Show events related to zTagName */ const char *zMatchStyle = P("ms"); /* Tag/branch match style string */ MatchStyle matchStyle = MS_EXACT; /* Match style code */ const char *zMatchDesc = 0; /* Tag match expression description text */ const char *zError = 0; /* Tag match error string */ const char *zTagSql = 0; /* Tag/branch match SQL expression */ const char *zSearch = P("s"); /* Search string */ const char *zUses = P("uf"); /* Only show check-ins holding this file */ const char *zYearMonth = P("ym"); /* Show check-ins for the given YYYY-MM */ const char *zYearWeek = P("yw"); /* Check-ins for YYYY-WW (week-of-year) */ const char *zDay = P("ymd"); /* Check-ins for the day YYYY-MM-DD */ const char *zChng = P("chng"); /* List of GLOBs for files that changed */ int useDividers = P("nd")==0; /* Show dividers if "nd" is missing */ int renameOnly = P("namechng")!=0; /* Show only check-ins that rename files */ int forkOnly = PB("forks"); /* Show only forks and their children */ int bisectOnly = PB("bisect"); /* Show the check-ins of the bisect */ int tmFlags = 0; /* Timeline flags */ const char *zThisTag = 0; /* Suppress links to this tag */ const char *zThisUser = 0; /* Suppress links to this user */ HQuery url; /* URL for various branch links */ int from_rid = name_to_typed_rid(P("from"),"ci"); /* from= for paths */ int to_rid = name_to_typed_rid(P("to"),"ci"); /* to= for path timelines */ int noMerge = P("shortest")==0; /* Follow merge links if shorter */
︙ | ︙ | |||
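The tagQuote() helper shown above appears here only in diff context. As a rough standalone sketch of the quoting rule it implements (wrap the tag in double quotes and precede any embedded double quote or backslash with a backslash), the following hypothetical helper, which is not part of Fossil and skips the Blob machinery, produces the same kind of output:

  #include <stdio.h>

  /* Sketch of the tagQuote() escaping rule.  The caller must supply an
  ** output buffer of at least 2*strlen(zTag)+3 bytes (worst case where
  ** every character needs an escape). */
  static void quote_tag(const char *zTag, char *zOut){
    char *z = zOut;
    *z++ = '"';
    while( *zTag ){
      if( *zTag=='"' || *zTag=='\\' ) *z++ = '\\';
      *z++ = *zTag++;
    }
    *z++ = '"';
    *z = 0;
  }

  int main(void){
    char zBuf[64];
    quote_tag("release-1.0", zBuf);
    printf("%s\n", zBuf);      /* prints: "release-1.0" */
    quote_tag("say \"hi\"", zBuf);
    printf("%s\n", zBuf);      /* prints: "say \"hi\"" */
    return 0;
  }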
1302 1303 1304 1305 1306 1307 1308 | || (bisectOnly && !g.perm.Setup) ){ login_needed(g.anon.Read && g.anon.RdTkt && g.anon.RdWiki); return; } url_initialize(&url, "timeline"); cgi_query_parameters_to_url(&url); | > > > > > > > > > > > | < | > | > | > > > > > > > | < > | > | > > | < | > > < | > | 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 | || (bisectOnly && !g.perm.Setup) ){ login_needed(g.anon.Read && g.anon.RdTkt && g.anon.RdWiki); return; } url_initialize(&url, "timeline"); cgi_query_parameters_to_url(&url); /* Convert r=TAG to t=TAG&rel. */ if( zBrName && !related ){ cgi_delete_query_parameter("r"); cgi_set_query_parameter("t", zBrName); cgi_set_query_parameter("rel", "1"); zTagName = zBrName; related = 1; } /* Ignore empty tag query strings. */ if( zTagName && !*zTagName ){ zTagName = 0; } /* Finish preliminary processing of tag match queries. */ if( zTagName ){ /* Interpet the tag style string. */ if( fossil_stricmp(zMatchStyle, "glob")==0 ){ matchStyle = MS_GLOB; }else if( fossil_stricmp(zMatchStyle, "like")==0 ){ matchStyle = MS_LIKE; }else if( fossil_stricmp(zMatchStyle, "regexp")==0 ){ matchStyle = MS_REGEXP; }else{ /* For exact maching, inhibit links to the selected tag. */ zThisTag = zTagName; } /* Display a checkbox to enable/disable display of related check-ins. */ style_submenu_checkbox("rel", "Related", 0); /* Construct the tag match expression. */ zTagSql = tagMatchExpression(matchStyle, zTagName, &zMatchDesc, &zError); } if( zMark && zMark[0]==0 ){ if( zAfter ) zMark = zAfter; if( zBefore ) zMark = zBefore; if( zCirca ) zMark = zCirca; } if( (zTagSql && db_int(0,"SELECT count(*) " "FROM tagxref NATURAL JOIN tag WHERE %s",zTagSql/*safe-for-%s*/)<=nEntry) ){ nEntry = -1; zCirca = 0; } if( zType[0]=='a' ){ tmFlags |= TIMELINE_BRIEF | TIMELINE_GRAPH; }else{ |
︙ | ︙ | |||
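For reference, given a tag filter supplied through t= and ms=, the expressions that tagMatchExpression() assembles take roughly the shapes below. These strings are illustrative only, not captured from a real run, and the tag ID 42 in the exact-match case is made up:

  /* Illustrative shapes of the generated tag-match SQL (assumed inputs). */
  static const char *const azTagExprExample[] = {
    /* ms=exact,  t=release-1.0 */  "(tagid=42)",
    /* ms=glob,   t=release-*   */  "(tagname GLOB 'sym-release-*')",
    /* ms=like,   t=release-%   */  "(tagname LIKE 'sym-release-%')",
    /* ms=regexp, t=release-.*  */  "(tagname REGEXP '^sym-(release-.*)$')"
  };

The exact-match form can compare against a numeric tag ID directly, which is why it bypasses the join against the tag table that the other three styles require.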
1438 1439 1440 1441 1442 1443 1444 | blob_append(&sql, " AND event.objid IN (0", -1); while( p ){ blob_append_sql(&sql, ",%d", p->rid); p = p->u.pTo; } blob_append(&sql, ")", -1); path_reset(); | > > > > | > | < | 1721 1722 1723 1724 1725 1726 1727 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742 1743 1744 | blob_append(&sql, " AND event.objid IN (0", -1); while( p ){ blob_append_sql(&sql, ",%d", p->rid); p = p->u.pTo; } blob_append(&sql, ")", -1); path_reset(); addFileGlobExclusion(zChng, &sql); tmFlags |= TIMELINE_DISJOINT; db_multi_exec("%s", blob_sql_text(&sql)); style_submenu_checkbox("v", "Files", zType[0]!='a' && zType[0]!='c'); blob_appendf(&desc, "%d check-ins going from ", db_int(0, "SELECT count(*) FROM timeline")); blob_appendf(&desc, "%z[%h]</a>", href("%R/info/%h", zFrom), zFrom); blob_append(&desc, " to ", -1); blob_appendf(&desc, "%z[%h]</a>", href("%R/info/%h",zTo), zTo); addFileGlobDescription(zChng, &desc); }else if( (p_rid || d_rid) && g.perm.Read ){ /* If p= or d= is present, ignore all other parameters other than n= */ char *zUuid; int np, nd; tmFlags |= TIMELINE_DISJOINT; if( p_rid && d_rid ){ |
︙ | ︙ | |||
1487 1488 1489 1490 1491 1492 1493 1494 1495 | href("%R/info/%!S", zUuid), zUuid); if( d_rid ){ if( p_rid ){ /* If both p= and d= are set, we don't have the uuid of d yet. */ zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", d_rid); } } style_submenu_entry("n","Max:",4,0); timeline_y_submenu(1); | > < < | | < < < > | 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 1790 1791 1792 1793 1794 1795 1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 1808 1809 1810 1811 1812 1813 1814 1815 1816 1817 | href("%R/info/%!S", zUuid), zUuid); if( d_rid ){ if( p_rid ){ /* If both p= and d= are set, we don't have the uuid of d yet. */ zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", d_rid); } } style_submenu_checkbox("v", "Files", zType[0]!='a' && zType[0]!='c'); style_submenu_entry("n","Max:",4,0); timeline_y_submenu(1); }else if( f_rid && g.perm.Read ){ /* If f= is present, ignore all other parameters other than n= */ char *zUuid; db_multi_exec( "CREATE TEMP TABLE IF NOT EXISTS ok(rid INTEGER PRIMARY KEY);" "INSERT INTO ok VALUES(%d);" "INSERT OR IGNORE INTO ok SELECT pid FROM plink WHERE cid=%d;" "INSERT OR IGNORE INTO ok SELECT cid FROM plink WHERE pid=%d;", f_rid, f_rid, f_rid ); blob_append_sql(&sql, " AND event.objid IN ok"); db_multi_exec("%s", blob_sql_text(&sql)); if( useDividers ) selectedRid = f_rid; blob_appendf(&desc, "Parents and children of check-in "); zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", f_rid); blob_appendf(&desc, "%z[%S]</a>", href("%R/info/%!S", zUuid), zUuid); tmFlags |= TIMELINE_DISJOINT; style_submenu_checkbox("unhide", "Unhide", 0); style_submenu_checkbox("v", "Files", zType[0]!='a' && zType[0]!='c'); }else{ /* Otherwise, a timeline based on a span of time */ int n; const char *zEType = "timeline item"; char *zDate; Blob cond; blob_zero(&cond); addFileGlobExclusion(zChng, &cond); if( zUses ){ blob_append_sql(&cond, " AND event.objid IN usesfile "); } if( renameOnly ){ blob_append_sql(&cond, " AND event.objid IN rnfile "); } if( forkOnly ){ |
︙ | ︙ | |||
1544 1545 1546 1547 1548 1549 1550 | blob_append_sql(&cond, " AND %Q=strftime('%%Y-%%W',event.mtime) ", zYearWeek); } else if( zDay ){ blob_append_sql(&cond, " AND %Q=strftime('%%Y-%%m-%%d',event.mtime) ", zDay); } | | | | | | | | | 1828 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 1843 1844 1845 1846 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857 1858 1859 1860 1861 1862 1863 1864 1865 1866 1867 1868 1869 1870 | blob_append_sql(&cond, " AND %Q=strftime('%%Y-%%W',event.mtime) ", zYearWeek); } else if( zDay ){ blob_append_sql(&cond, " AND %Q=strftime('%%Y-%%m-%%d',event.mtime) ", zDay); } if( zTagSql ){ blob_append_sql(&cond, " AND (EXISTS(SELECT 1 FROM tagxref NATURAL JOIN tag" " WHERE %s AND tagtype>0 AND rid=blob.rid)\n", zTagSql/*safe-for-%s*/); if( related ){ /* The next two blob_appendf() calls add SQL that causes check-ins which ** are not part of the branch, but are parents or children of the ** branch, to be included in the report. These related check-ins are ** useful in helping to visualize what has happened on a quiescent ** branch that is infrequently merged with a much more active branch. */ blob_append_sql(&cond, " OR EXISTS(SELECT 1 FROM plink CROSS JOIN tagxref ON rid=cid" " NATURAL JOIN tag WHERE %s AND tagtype>0 AND pid=blob.rid)\n", zTagSql/*safe-for-%s*/ ); if( (tmFlags & TIMELINE_UNHIDE)==0 ){ blob_append_sql(&cond, " AND NOT EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=cid" " WHERE tagid=%d AND tagtype>0 AND pid=blob.rid)\n", TAG_HIDDEN ); } if( P("mionly")==0 ){ blob_append_sql(&cond, " OR EXISTS(SELECT 1 FROM plink CROSS JOIN tagxref ON rid=pid" " NATURAL JOIN tag WHERE %s AND tagtype>0 AND cid=blob.rid)\n", zTagSql/*safe-for-%s*/ ); if( (tmFlags & TIMELINE_UNHIDE)==0 ){ blob_append_sql(&cond, " AND NOT EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=pid" " WHERE tagid=%d AND tagtype>0 AND cid=blob.rid)\n", TAG_HIDDEN );
︙ | ︙ | |||
1723 1724 1725 1726 1727 1728 1729 | blob_appendf(&desc, " in the most recent bisect"); tmFlags |= TIMELINE_DISJOINT; } if( zUser ){ blob_appendf(&desc, " by user %h", zUser); tmFlags |= TIMELINE_DISJOINT; } | | > > > > | < > | > | > > > > > | > > > | 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 2031 2032 2033 2034 2035 2036 2037 2038 2039 2040 2041 2042 2043 2044 2045 2046 2047 2048 2049 2050 2051 2052 2053 2054 2055 2056 | blob_appendf(&desc, " in the most recent bisect"); tmFlags |= TIMELINE_DISJOINT; } if( zUser ){ blob_appendf(&desc, " by user %h", zUser); tmFlags |= TIMELINE_DISJOINT; } if( zTagSql ){ if( matchStyle==MS_EXACT ){ if( related ){ blob_appendf(&desc, " related to %h", zMatchDesc); }else{ blob_appendf(&desc, " tagged with %h", zMatchDesc); } }else{ if( related ){ blob_appendf(&desc, " related to tags matching %h", zMatchDesc); }else{ blob_appendf(&desc, " with tags matching %h", zMatchDesc); } } tmFlags |= TIMELINE_DISJOINT; } addFileGlobDescription(zChng, &desc); if( rAfter>0.0 ){ if( rBefore>0.0 ){ blob_appendf(&desc, " occurring between %h and %h.<br />", zAfter, zBefore); }else{ blob_appendf(&desc, " occurring on or after %h.<br />", zAfter); } }else if( rBefore>0.0 ){ blob_appendf(&desc, " occurring on or before %h.<br />", zBefore); }else if( rCirca>0.0 ){ blob_appendf(&desc, " occurring around %h.<br />", zCirca); } if( zSearch ){ blob_appendf(&desc, " matching \"%h\"", zSearch); } if( g.perm.Hyperlink ){ static const char *const azMatchStyles[] = { "exact", "Exact", "glob", "Glob", "like", "Like", "regexp", "Regexp" }; double rDate; zDate = db_text(0, "SELECT min(timestamp) FROM timeline /*scan*/"); if( (!zDate || !zDate[0]) && ( zAfter || zBefore ) ){ zDate = mprintf("%s", (zAfter ? zAfter : zBefore)); } if( zDate ){ rDate = symbolic_name_to_mtime(zDate); |
︙ | ︙ | |||
1779 1780 1781 1782 1783 1784 1785 | rDate+ONE_SECOND, blob_sql_text(&cond)) ){ timeline_submenu(&url, "Newer", "a", zDate, "b"); } free(zDate); } if( zType[0]=='a' || zType[0]=='c' ){ | < | | < > | < > | > > > > > > | 2076 2077 2078 2079 2080 2081 2082 2083 2084 2085 2086 2087 2088 2089 2090 2091 2092 2093 2094 2095 2096 2097 2098 2099 2100 2101 2102 2103 2104 2105 2106 2107 2108 2109 2110 2111 2112 2113 2114 2115 2116 2117 2118 2119 2120 | rDate+ONE_SECOND, blob_sql_text(&cond)) ){ timeline_submenu(&url, "Newer", "a", zDate, "b"); } free(zDate); } if( zType[0]=='a' || zType[0]=='c' ){ style_submenu_checkbox("unhide", "Unhide", 0); } style_submenu_checkbox("v", "Files", zType[0]!='a' && zType[0]!='c'); style_submenu_entry("n","Max:",4,0); timeline_y_submenu(disableY); style_submenu_entry("t", "Tag Filter:", -8, 0); style_submenu_multichoice("ms", count(azMatchStyles)/2, azMatchStyles, 0); } blob_zero(&cond); } if( PB("showsql") ){ @ <pre>%h(blob_sql_text(&sql))</pre> } if( search_restrict(SRCH_CKIN)!=0 ){ style_submenu_element("Search", "%R/search?y=c"); } if( PB("showid") ) tmFlags |= TIMELINE_SHOWRID; if( useDividers && zMark && zMark[0] ){ double r = symbolic_name_to_mtime(zMark); if( r>0.0 ) selectedRid = timeline_add_divider(r); } blob_zero(&sql); db_prepare(&q, "SELECT * FROM timeline ORDER BY sortby DESC /*scan*/"); @ <h2>%b(&desc)</h2> blob_reset(&desc); /* Report any errors. */ if( zError ){ @ <p class="generalError">%h(zError)</p> } www_print_timeline(&q, tmFlags, zThisUser, zThisTag, selectedRid, 0); db_finalize(&q); if( zOlderButton ){ @ %z(xhref("class='button'","%z",zOlderButton))Older</a> } style_footer(); } |
︙ | ︙ | |||
1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 | static int isIsoDate(const char *z){ return strlen(z)==10 && z[4]=='-' && z[7]=='-' && fossil_isdigit(z[0]) && fossil_isdigit(z[5]); } /* ** COMMAND: timeline ** ** Usage: %fossil timeline ?WHEN? ?CHECKIN|DATETIME? ?OPTIONS? ** ** Print a summary of activity going backwards in date and time ** specified or from the current date and time if no arguments ** are given. The WHEN argument can be any unique abbreviation ** of one of these keywords: ** ** before ** after ** descendants | children ** ancestors | parents ** | > > > > > > > | < | > | > > > > > | 2296 2297 2298 2299 2300 2301 2302 2303 2304 2305 2306 2307 2308 2309 2310 2311 2312 2313 2314 2315 2316 2317 2318 2319 2320 2321 2322 2323 2324 2325 2326 2327 2328 2329 2330 2331 2332 2333 2334 2335 2336 2337 2338 2339 2340 2341 | static int isIsoDate(const char *z){ return strlen(z)==10 && z[4]=='-' && z[7]=='-' && fossil_isdigit(z[0]) && fossil_isdigit(z[5]); } /* ** Return true if the input string can be converted to a julianday. */ static int fossil_is_julianday(const char *zDate){ return db_int(0, "SELECT EXISTS (SELECT julianday(%Q) AS jd WHERE jd IS NOT NULL)", zDate); } /* ** COMMAND: timeline ** ** Usage: %fossil timeline ?WHEN? ?CHECKIN|DATETIME? ?OPTIONS? ** ** Print a summary of activity going backwards in date and time ** specified or from the current date and time if no arguments ** are given. The WHEN argument can be any unique abbreviation ** of one of these keywords: ** ** before ** after ** descendants | children ** ancestors | parents ** ** The CHECKIN can be any unique prefix of 4 characters or more. You ** can also say "current" for the current version. ** ** DATETIME may be "now" or "YYYY-MM-DDTHH:MM:SS.SSS". If in ** year-month-day form, it may be truncated, the "T" may be replaced by ** a space, and it may also name a timezone offset from UTC as "-HH:MM" ** (westward) or "+HH:MM" (eastward). Either no timezone suffix or "Z" ** means UTC. ** ** ** Options: ** -n|--limit N Output the first N entries (default 20 lines). ** N=0 means no limit. ** -p|--path PATH Output items affecting PATH only. ** PATH can be a file or a sub directory. ** --offset P skip P changes |
︙ | ︙ | |||
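The new fossil_is_julianday() helper above simply asks SQLite whether julianday() can parse the string. A minimal standalone sketch of the same test, using only the public SQLite C API (the helper name and the sample strings are invented for illustration):

  #include <stdio.h>
  #include "sqlite3.h"

  /* Return 1 if SQLite's julianday() accepts zDate, else 0. */
  static int is_julianday(sqlite3 *db, const char *zDate){
    sqlite3_stmt *pStmt;
    int ok = 0;
    if( sqlite3_prepare_v2(db, "SELECT julianday(?1) IS NOT NULL",
                           -1, &pStmt, 0)!=SQLITE_OK ) return 0;
    sqlite3_bind_text(pStmt, 1, zDate, -1, SQLITE_STATIC);
    if( sqlite3_step(pStmt)==SQLITE_ROW ) ok = sqlite3_column_int(pStmt, 0);
    sqlite3_finalize(pStmt);
    return ok;
  }

  int main(void){
    sqlite3 *db;
    sqlite3_open(":memory:", &db);
    printf("%d\n", is_julianday(db, "2016-11-07 00:50:10"));  /* 1 */
    printf("%d\n", is_julianday(db, "not-a-date"));           /* 0 */
    sqlite3_close(db);
    return 0;
  }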
2047 2048 2049 2050 2051 2052 2053 | const char *zOffset; const char *zType; char *zOrigin; char *zDate; Blob sql; int objid = 0; Blob uuid; | | | 2361 2362 2363 2364 2365 2366 2367 2368 2369 2370 2371 2372 2373 2374 2375 | const char *zOffset; const char *zType; char *zOrigin; char *zDate; Blob sql; int objid = 0; Blob uuid; int mode = TIMELINE_MODE_NONE; int verboseFlag = 0 ; int iOffset; const char *zFilePattern = 0; Blob treeName; verboseFlag = find_option("verbose","v", 0)!=0; if( !verboseFlag){ |
︙ | ︙ | |||
2088 2089 2090 2091 2092 2093 2094 | /* We should be done with options.. */ verify_all_options(); if( g.argc>=4 ){ k = strlen(g.argv[2]); if( strncmp(g.argv[2],"before",k)==0 ){ | | | | | | | | | | | | > > | | | | | | 2402 2403 2404 2405 2406 2407 2408 2409 2410 2411 2412 2413 2414 2415 2416 2417 2418 2419 2420 2421 2422 2423 2424 2425 2426 2427 2428 2429 2430 2431 2432 2433 2434 2435 2436 2437 2438 2439 2440 2441 2442 2443 2444 2445 2446 2447 2448 2449 2450 2451 2452 2453 2454 2455 2456 2457 2458 2459 2460 2461 2462 2463 2464 2465 2466 2467 2468 2469 2470 2471 2472 2473 2474 2475 2476 2477 2478 2479 2480 2481 2482 2483 2484 2485 2486 2487 2488 2489 2490 2491 2492 2493 2494 2495 | /* We should be done with options.. */ verify_all_options(); if( g.argc>=4 ){ k = strlen(g.argv[2]); if( strncmp(g.argv[2],"before",k)==0 ){ mode = TIMELINE_MODE_BEFORE; }else if( strncmp(g.argv[2],"after",k)==0 && k>1 ){ mode = TIMELINE_MODE_AFTER; }else if( strncmp(g.argv[2],"descendants",k)==0 ){ mode = TIMELINE_MODE_CHILDREN; }else if( strncmp(g.argv[2],"children",k)==0 ){ mode = TIMELINE_MODE_CHILDREN; }else if( strncmp(g.argv[2],"ancestors",k)==0 && k>1 ){ mode = TIMELINE_MODE_PARENTS; }else if( strncmp(g.argv[2],"parents",k)==0 ){ mode = TIMELINE_MODE_PARENTS; }else if(!zType && !zLimit){ usage("?WHEN? ?CHECKIN|DATETIME? ?-n|--limit #? ?-t|--type TYPE? " "?-W|--width WIDTH? ?-p|--path PATH"); } if( '-' != *g.argv[3] ){ zOrigin = g.argv[3]; }else{ zOrigin = "now"; } }else if( g.argc==3 ){ zOrigin = g.argv[2]; }else{ zOrigin = "now"; } k = strlen(zOrigin); blob_zero(&uuid); blob_append(&uuid, zOrigin, -1); if( fossil_strcmp(zOrigin, "now")==0 ){ if( mode==TIMELINE_MODE_CHILDREN || mode==TIMELINE_MODE_PARENTS ){ fossil_fatal("cannot compute descendants or ancestors of a date"); } zDate = mprintf("(SELECT datetime('now'))"); }else if( strncmp(zOrigin, "current", k)==0 ){ if( !g.localOpen ){ fossil_fatal("must be within a local checkout to use 'current'"); } objid = db_lget_int("checkout",0); zDate = mprintf("(SELECT mtime FROM plink WHERE cid=%d)", objid); }else if( name_to_uuid(&uuid, 0, "*")==0 ){ objid = db_int(0, "SELECT rid FROM blob WHERE uuid=%B", &uuid); zDate = mprintf("(SELECT mtime FROM event WHERE objid=%d)", objid); }else if( fossil_is_julianday(zOrigin) ){ const char *zShift = ""; if( mode==TIMELINE_MODE_CHILDREN || mode==TIMELINE_MODE_PARENTS ){ fossil_fatal("cannot compute descendants or ancestors of a date"); } if( mode==TIMELINE_MODE_NONE ){ if( isIsoDate(zOrigin) ) zShift = ",'+1 day'"; } zDate = mprintf("(SELECT julianday(%Q%s, fromLocal()))", zOrigin, zShift); }else{ fossil_fatal("unknown check-in or invalid date: %s", zOrigin); } if( zFilePattern ){ if( zType==0 ){ /* When zFilePattern is specified and type is not specified, only show * file check-ins */ zType="ci"; } file_tree_name(zFilePattern, &treeName, 0, 1); if( fossil_strcmp(blob_str(&treeName), ".")==0 ){ /* When zTreeName refers to g.zLocalRoot, it's like not specifying * zFilePattern. */ zFilePattern = 0; } } if( mode==TIMELINE_MODE_NONE ) mode = TIMELINE_MODE_BEFORE; blob_zero(&sql); blob_append(&sql, timeline_query_for_tty(), -1); blob_append_sql(&sql, "\n AND event.mtime %s %s", ( mode==TIMELINE_MODE_BEFORE || mode==TIMELINE_MODE_PARENTS ) ? 
"<=" : ">=", zDate /*safe-for-%s*/ ); if( mode==TIMELINE_MODE_CHILDREN || mode==TIMELINE_MODE_PARENTS ){ db_multi_exec("CREATE TEMP TABLE ok(rid INTEGER PRIMARY KEY)"); if( mode==TIMELINE_MODE_CHILDREN ){ compute_descendants(objid, n); }else{ compute_ancestors(objid, n, 0); } blob_append_sql(&sql, "\n AND blob.rid IN ok"); } if( zType && (zType[0]!='a') ){ |
︙ | ︙ |
Changes to src/tkt.c.
︙ | ︙ | |||
214 215 216 217 218 219 220 | for(i=0; i<p->nField; i++){ const char *zName = p->aField[i].zName; const char *zBaseName = zName[0]=='+' ? zName+1 : zName; j = fieldId(zBaseName); if( j<0 ) continue; aUsed[j] = 1; if( aField[j].mUsed & USEDBY_TICKET ){ | > | | | | > > > > | | 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 | for(i=0; i<p->nField; i++){ const char *zName = p->aField[i].zName; const char *zBaseName = zName[0]=='+' ? zName+1 : zName; j = fieldId(zBaseName); if( j<0 ) continue; aUsed[j] = 1; if( aField[j].mUsed & USEDBY_TICKET ){ const char *zUsedByName = zName; if( zUsedByName[0]=='+' ){ zUsedByName++; blob_append_sql(&sql1,", \"%w\"=coalesce(\"%w\",'') || %Q", zUsedByName, zUsedByName, p->aField[i].zValue); }else{ blob_append_sql(&sql1,", \"%w\"=%Q", zUsedByName, p->aField[i].zValue); } } if( aField[j].mUsed & USEDBY_TICKETCHNG ){ const char *zUsedByName = zName; if( zUsedByName[0]=='+' ){ zUsedByName++; } blob_append_sql(&sql2, ",\"%w\"", zUsedByName); blob_append_sql(&sql3, ",%Q", p->aField[i].zValue); } if( rid>0 ){ wiki_extract_links(p->aField[i].zValue, rid, 1, p->rDate, i==0, 0); } } blob_append_sql(&sql1, " WHERE tkt_id=%d", tktid); |
︙ | ︙ | |||
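The "+field" branch above appends to the stored value with coalesce("field",'') || value rather than a plain concatenation. The coalesce() matters because concatenating with a NULL column in SQL yields NULL, which would silently discard the appended text. A small standalone demonstration against an in-memory database (the table and column names here are invented and are not the real ticket schema):

  #include <stdio.h>
  #include "sqlite3.h"

  int main(void){
    sqlite3 *db;
    sqlite3_stmt *pStmt;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db,
      "CREATE TABLE t(comment TEXT);"
      "INSERT INTO t VALUES(NULL);"
      /* Without coalesce(), NULL || 'first note' would stay NULL. */
      "UPDATE t SET comment = coalesce(comment,'') || 'first note';"
      "UPDATE t SET comment = coalesce(comment,'') || ' / second note';",
      0, 0, 0);
    sqlite3_prepare_v2(db, "SELECT comment FROM t", -1, &pStmt, 0);
    if( sqlite3_step(pStmt)==SQLITE_ROW ){
      printf("%s\n", (const char*)sqlite3_column_text(pStmt, 0));
      /* prints: first note / second note */
    }
    sqlite3_finalize(pStmt);
    sqlite3_close(db);
    return 0;
  }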
421 422 423 424 425 426 427 | /* ** For trouble-shooting purposes, render a dump of the aField[] table to ** the webpage currently under construction. */ static void showAllFields(void){ int i; | | | < | | < | < | < | < | < | | < | 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 | /* ** For trouble-shooting purposes, render a dump of the aField[] table to ** the webpage currently under construction. */ static void showAllFields(void){ int i; @ <div style="color:blue"> @ <p>Database fields:</p><ul> for(i=0; i<nField; i++){ @ <li>aField[%d(i)].zName = "%h(aField[i].zName)"; @ originally = "%h(aField[i].zValue)"; @ currently = "%h(PD(aField[i].zName,""))"; if( aField[i].zAppend ){ @ zAppend = "%h(aField[i].zAppend)"; } @ mUsed = %d(aField[i].mUsed); } @ </ul></div> } /* ** WEBPAGE: tktview ** URL: tktview?name=UUID ** ** View a ticket identified by the name= query parameter. */ void tktview_page(void){ const char *zScript; char *zFullName; const char *zUuid = PD("name",""); login_check_credentials(); if( !g.perm.RdTkt ){ login_needed(g.anon.RdTkt); return; } if( g.anon.WrTkt || g.anon.ApndTkt ){ style_submenu_element("Edit", "%s/tktedit?name=%T", g.zTop, PD("name","")); } if( g.perm.Hyperlink ){ style_submenu_element("History", "%s/tkthistory/%T", g.zTop, zUuid); style_submenu_element("Timeline", "%s/tkttimeline/%T", g.zTop, zUuid); style_submenu_element("Check-ins", "%s/tkttimeline/%T?y=ci", g.zTop, zUuid); } if( g.anon.NewTkt ){ style_submenu_element("New Ticket", "%s/tktnew", g.zTop); } if( g.anon.ApndTkt && g.anon.Attach ){ style_submenu_element("Attach", "%s/attachadd?tkt=%T&from=%s/tktview/%t", g.zTop, zUuid, g.zTop, zUuid); } if( P("plaintext") ){ style_submenu_element("Formatted", "%R/tktview/%s", zUuid); }else{ style_submenu_element("Plaintext", "%R/tktview/%s?plaintext", zUuid); } style_header("View Ticket"); if( g.thTrace ) Th_Trace("BEGIN_TKTVIEW<br />\n", -1); ticket_init(); initializeVariablesFromCGI(); getAllTicketFields(); initializeVariablesFromDb(); |
︙ | ︙ | |||
544 545 546 547 548 549 550 | */ static int ticket_put( Blob *pTicket, /* The text of the ticket change record */ const char *zTktId, /* The ticket to which this change is applied */ int needMod /* True if moderation is needed */ ){ int result; | > > | < | 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 | */ static int ticket_put( Blob *pTicket, /* The text of the ticket change record */ const char *zTktId, /* The ticket to which this change is applied */ int needMod /* True if moderation is needed */ ){ int result; int rid; manifest_crosslink_begin(); rid = content_put_ex(pTicket, 0, 0, 0, needMod); if( rid==0 ){ fossil_fatal("trouble committing ticket: %s", g.zErrMsg); } if( needMod ){ moderation_table_create(); db_multi_exec( "INSERT INTO modreq(objid, tktid) VALUES(%d,%Q)", rid, zTktId ); }else{ db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d);", rid); db_multi_exec("INSERT OR IGNORE INTO unclustered VALUES(%d);", rid); } result = (manifest_crosslink(rid, pTicket, MC_NONE)==0); assert( blob_is_reset(pTicket) ); if( !result ){ result = manifest_crosslink_end(MC_PERMIT_HOOKS); }else{ manifest_crosslink_end(MC_NONE); } |
︙ | ︙ | |||
650 651 652 653 654 655 656 | blob_reset(&tktchng); return TH_OK; } needMod = ticket_need_moderation(0); if( g.zPath[0]=='d' ){ const char *zNeedMod = needMod ? "required" : "skipped"; /* If called from /debug_tktnew or /debug_tktedit... */ | | > | | 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 | blob_reset(&tktchng); return TH_OK; } needMod = ticket_need_moderation(0); if( g.zPath[0]=='d' ){ const char *zNeedMod = needMod ? "required" : "skipped"; /* If called from /debug_tktnew or /debug_tktedit... */ @ <div style="color:blue"> @ <p>Ticket artifact that would have been submitted:</p> @ <blockquote><pre>%h(blob_str(&tktchng))</pre></blockquote> @ <blockquote><pre>Moderation would be %h(zNeedMod).</pre></blockquote> @ </div> @ <hr /> return TH_OK; }else{ if( g.thTrace ){ Th_Trace("submit_ticket {\n<blockquote><pre>\n%h\n</pre></blockquote>\n" "}<br />\n", blob_str(&tktchng)); } |
︙ | ︙ | |||
847 848 849 850 851 852 853 | if( !g.perm.Hyperlink || !g.perm.RdTkt ){ login_needed(g.anon.Hyperlink && g.anon.RdTkt); return; } zUuid = PD("name",""); zType = PD("y","a"); if( zType[0]!='c' ){ | | | | < | < | < | 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 | if( !g.perm.Hyperlink || !g.perm.RdTkt ){ login_needed(g.anon.Hyperlink && g.anon.RdTkt); return; } zUuid = PD("name",""); zType = PD("y","a"); if( zType[0]!='c' ){ style_submenu_element("Check-ins", "%s/tkttimeline?name=%T&y=ci", g.zTop, zUuid); }else{ style_submenu_element("Timeline", "%s/tkttimeline?name=%T", g.zTop, zUuid); } style_submenu_element("History", "%s/tkthistory/%s", g.zTop, zUuid); style_submenu_element("Status", "%s/info/%s", g.zTop, zUuid); if( zType[0]=='c' ){ zTitle = mprintf("Check-ins Associated With Ticket %h", zUuid); }else{ zTitle = mprintf("Timeline Of Ticket %h", zUuid); } style_header("%z", zTitle); |
︙ | ︙ | |||
922 923 924 925 926 927 928 | login_check_credentials(); if( !g.perm.Hyperlink || !g.perm.RdTkt ){ login_needed(g.anon.Hyperlink && g.anon.RdTkt); return; } zUuid = PD("name",""); zTitle = mprintf("History Of Ticket %h", zUuid); | | < | | | < | < | < | 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 | login_check_credentials(); if( !g.perm.Hyperlink || !g.perm.RdTkt ){ login_needed(g.anon.Hyperlink && g.anon.RdTkt); return; } zUuid = PD("name",""); zTitle = mprintf("History Of Ticket %h", zUuid); style_submenu_element("Status", "%s/info/%s", g.zTop, zUuid); style_submenu_element("Check-ins", "%s/tkttimeline?name=%s&y=ci", g.zTop, zUuid); style_submenu_element("Timeline", "%s/tkttimeline?name=%s", g.zTop, zUuid); if( P("plaintext")!=0 ){ style_submenu_element("Formatted", "%R/tkthistory/%s", zUuid); }else{ style_submenu_element("Plaintext", "%R/tkthistory/%s?plaintext", zUuid); } style_header("%z", zTitle); tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname GLOB 'tkt-%q*'",zUuid); if( tagid==0 ){ @ No such ticket: %h(zUuid) style_footer(); |
︙ | ︙ | |||
1077 1078 1079 1080 1081 1082 1083 | ** ** If TICKETFILTER is given on the commandline, the query is ** limited with a new WHERE-condition. ** example: Report lists a column # with the uuid ** TICKETFILTER may be [#]='uuuuuuuuu' ** example: Report only lists rows with status not open ** TICKETFILTER: status != 'open' | | | 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 | ** ** If TICKETFILTER is given on the commandline, the query is ** limited with a new WHERE-condition. ** example: Report lists a column # with the uuid ** TICKETFILTER may be [#]='uuuuuuuuu' ** example: Report only lists rows with status not open ** TICKETFILTER: status != 'open' ** ** If --quote is used, the tickets are encoded by quoting special ** chars (space -> \\s, tab -> \\t, newline -> \\n, cr -> \\r, ** formfeed -> \\f, vtab -> \\v, nul -> \\0, \\ -> \\\\). ** Otherwise, the simplified encoding as on the show report raw page ** in the GUI is used. This has no effect in JSON mode. ** ** Instead of the report title it's possible to use the report |
︙ | ︙ | |||
1383 1384 1385 1386 1387 1388 1389 | } } blob_appendf(&tktchng, "K %s\n", zTktUuid); blob_appendf(&tktchng, "U %F\n", zUser); md5sum_blob(&tktchng, &cksum); blob_appendf(&tktchng, "Z %b\n", &cksum); if( ticket_put(&tktchng, zTktUuid, ticket_need_moderation(1)) ){ | | | 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 | } } blob_appendf(&tktchng, "K %s\n", zTktUuid); blob_appendf(&tktchng, "U %F\n", zUser); md5sum_blob(&tktchng, &cksum); blob_appendf(&tktchng, "Z %b\n", &cksum); if( ticket_put(&tktchng, zTktUuid, ticket_need_moderation(1)) ){ fossil_fatal("%s", g.zErrMsg); }else{ fossil_print("ticket %s succeeded for %s\n", (eCmd==set?"set":"add"),zTktUuid); } } } } |
︙ | ︙ | |||
1407 1408 1409 1410 1411 1412 1413 | #endif /* ** Add some standard submenu elements for ticket screens. */ void ticket_standard_submenu(unsigned int ok){ if( (ok & T_SRCH)!=0 && search_restrict(SRCH_TKT)!=0 ){ | | | | | 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 | #endif /* ** Add some standard submenu elements for ticket screens. */ void ticket_standard_submenu(unsigned int ok){ if( (ok & T_SRCH)!=0 && search_restrict(SRCH_TKT)!=0 ){ style_submenu_element("Search", "%R/tktsrch"); } if( (ok & T_REPLIST)!=0 ){ style_submenu_element("Reports", "%R/reportlist"); } if( (ok & T_NEW)!=0 && g.anon.NewTkt ){ style_submenu_element("New", "%R/tktnew"); } } /* ** WEBPAGE: ticket ** ** This is intended to be the primary "Ticket" page. Render as |
︙ | ︙ |
Changes to src/tktsetup.c.
︙ | ︙ | |||
522 523 524 525 526 527 528 | @ set alwaysPlaintext [info exists plaintext] @ query {SELECT datetime(tkt_mtime) AS xdate, login AS xlogin, @ mimetype as xmimetype, icomment AS xcomment, @ username AS xusername @ FROM ticketchng @ WHERE tkt_id=$tkt_id AND length(icomment)>0} { @ if {$seenRow} { | | | 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 | @ set alwaysPlaintext [info exists plaintext] @ query {SELECT datetime(tkt_mtime) AS xdate, login AS xlogin, @ mimetype as xmimetype, icomment AS xcomment, @ username AS xusername @ FROM ticketchng @ WHERE tkt_id=$tkt_id AND length(icomment)>0} { @ if {$seenRow} { @ html "<hr />\n" @ } else { @ html "<tr><td class='tktDspLabel'>User Comments:</td></tr>\n" @ html "<tr><td colspan='5' class='tktDspValue'>\n" @ set seenRow 1 @ } @ html "[htmlize $xlogin]" @ if {$xlogin ne $xusername && [string length $xusername]>0} { |
︙ | ︙ | |||
649 650 651 652 653 654 655 | @ <input type="text" name="username" value="$<username>" size="30" />:<br /> @ <textarea name="icomment" cols="80" rows="15" @ wrap="virtual" class="wikiedit">$<icomment></textarea> @ </td></tr> @ @ <th1>enable_output [info exists preview]</th1> @ <tr><td colspan="2"> | | | | 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 | @ <input type="text" name="username" value="$<username>" size="30" />:<br /> @ <textarea name="icomment" cols="80" rows="15" @ wrap="virtual" class="wikiedit">$<icomment></textarea> @ </td></tr> @ @ <th1>enable_output [info exists preview]</th1> @ <tr><td colspan="2"> @ Description Preview:<br /><hr /> @ <th1> @ if {$mutype eq "Wiki"} { @ wiki $icomment @ } elseif {$mutype eq "Plain Text"} { @ set r [randhex] @ wiki "<verbatim-$r>\n[string trimright $icomment]\n</verbatim-$r>" @ } elseif {$mutype eq {[links only]}} { @ set r [randhex] @ wiki "<verbatim-$r links>\n[string trimright $icomment]</verbatim-$r>" @ } else { @ wiki "<nowiki>\n[string trimright $icomment]\n</nowiki>" @ } @ </th1> @ <hr /> @ </td></tr> @ <th1>enable_output 1</th1> @ @ <tr> @ <td align="right"> @ <input type="submit" name="preview" value="Preview" /> @ </td> |
︙ | ︙ |
Changes to src/translate.c.
︙ | ︙ | |||
44 45 46 47 48 49 50 51 52 53 54 55 56 57 | ** ** Comments of the form: "|* @-comment: CC" (where "|" is really "/") ** cause CC to become a comment character for the @-substitution. ** Typical values for CC are "--" (for SQL text) or "#" (for Tcl script) ** or "//" (for C++ code). Lines of subsequent @-blocks that begin with ** CC are omitted from the output. ** */ #include <stdio.h> #include <ctype.h> #include <stdlib.h> #include <string.h> /* | > > > > > | 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 | ** ** Comments of the form: "|* @-comment: CC" (where "|" is really "/") ** cause CC to become a comment character for the @-substitution. ** Typical values for CC are "--" (for SQL text) or "#" (for Tcl script) ** or "//" (for C++ code). Lines of subsequent @-blocks that begin with ** CC are omitted from the output. ** ** Enhancement #3: ** ** If a non-enhancement #1 line ends in backslash, the backslash and the ** newline (\n) are not included in the argument to cgi_printf(). This ** is used to split one long output line across multiple source lines. */ #include <stdio.h> #include <ctype.h> #include <stdlib.h> #include <string.h> /* |
︙ | ︙ | |||
139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 | }else{ /* Otherwise (if the last non-whitespace was not '=') then generate ** a cgi_printf() statement whose format is the text following the '@'. ** Substrings of the form "%C(...)" (where C is any sequence of ** characters other than \000 and '(') will put "%C" in the ** format and add the "(...)" as an argument to the cgi_printf call. */ int indent; int nC; char c; i++; if( isspace(zLine[i]) ){ i++; } indent = i; for(j=0; zLine[i] && zLine[i]!='\r' && zLine[i]!='\n'; i++){ if( zLine[i]=='"' || zLine[i]=='\\' ){ zOut[j++] = '\\'; } zOut[j++] = zLine[i]; if( zLine[i]!='%' || zLine[i+1]=='%' || zLine[i+1]==0 ) continue; for(nC=1; zLine[i+nC] && zLine[i+nC]!='('; nC++){} if( zLine[i+nC]!='(' || !isalpha(zLine[i+nC-1]) ) continue; while( --nC ) zOut[j++] = zLine[++i]; zArg[nArg++] = ','; | > > > > > > | 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 | }else{ /* Otherwise (if the last non-whitespace was not '=') then generate ** a cgi_printf() statement whose format is the text following the '@'. ** Substrings of the form "%C(...)" (where C is any sequence of ** characters other than \000 and '(') will put "%C" in the ** format and add the "(...)" as an argument to the cgi_printf call. */ const char *zNewline = "\\n"; int indent; int nC; char c; i++; if( isspace(zLine[i]) ){ i++; } indent = i; for(j=0; zLine[i] && zLine[i]!='\r' && zLine[i]!='\n'; i++){ if( zLine[i]=='\\' && (!zLine[i+1] || zLine[i+1]=='\r' || zLine[i+1]=='\n') ){ zNewline = ""; break; } if( zLine[i]=='"' || zLine[i]=='\\' ){ zOut[j++] = '\\'; } zOut[j++] = zLine[i]; if( zLine[i]!='%' || zLine[i+1]=='%' || zLine[i+1]==0 ) continue; for(nC=1; zLine[i+nC] && zLine[i+nC]!='('; nC++){} if( zLine[i+nC]!='(' || !isalpha(zLine[i+nC-1]) ) continue; while( --nC ) zOut[j++] = zLine[++i]; zArg[nArg++] = ','; |
︙ | ︙ | |||
167 168 169 170 171 172 173 | k++; } i++; } } zOut[j] = 0; if( !inPrint ){ | | | | 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 | k++; } i++; } } zOut[j] = 0; if( !inPrint ){ fprintf(out,"%*scgi_printf(\"%s%s\"",indent-2,"", zOut, zNewline); inPrint = 1; }else{ fprintf(out,"\n%*s\"%s%s\"",indent+5, "", zOut, zNewline); } } } } int main(int argc, char **argv){ if( argc==2 ){ |
︙ | ︙ |
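Enhancement #3 above lets one long output line be written across several source lines: an @-line that ends in a backslash contributes its text to the same cgi_printf() call as the next @-line, without an intervening \n. As a hedged illustration only (the page markup, the helper name, and the exact whitespace of the generated call are invented), translator input such as

  @ <td class="timelineTime">\
  @ %s(zTime)</td>

would come out roughly as the single call in this self-contained sketch:

  #include <stdio.h>
  #define cgi_printf printf   /* stand-in so the sketch compiles on its own */

  static void emit_time_cell(const char *zTime){
    /* The trailing backslash suppressed the "\n" after the first piece,
    ** so both source lines land in one concatenated format string. */
    cgi_printf("<td class=\"timelineTime\">"
               "%s</td>\n", zTime);
  }

  int main(void){
    emit_time_cell("00:50:10");
    return 0;
  }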
Changes to src/undo.c.
︙ | ︙ | |||
223 224 225 226 227 228 229 | } /* ** Begin capturing a snapshot that can be undone. */ void undo_begin(void){ int cid; | < | | | | | 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 | } /* ** Begin capturing a snapshot that can be undone. */ void undo_begin(void){ int cid; static const char zSql[] = @ CREATE TABLE localdb.undo( @ pathname TEXT UNIQUE, -- Name of the file @ redoflag BOOLEAN, -- 0 for undoable. 1 for redoable @ existsflag BOOLEAN, -- True if the file exists @ isExe BOOLEAN, -- True if the file is executable @ isLink BOOLEAN, -- True if the file is symlink @ content BLOB -- Saved content @ ); @ CREATE TABLE localdb.undo_vfile AS SELECT * FROM vfile; @ CREATE TABLE localdb.undo_vmerge AS SELECT * FROM vmerge; ; if( undoDisable ) return; undo_reset(); db_multi_exec(zSql/*works-like:""*/); cid = db_lget_int("checkout", 0); db_lset_int("undo_checkout", cid); db_lset_int("undo_available", 1); db_lset("undo_cmdline", undoCmd); undoActive = 1; } |
︙ | ︙ | |||
375 376 377 378 379 380 381 | return zRc; } /* ** Make the current state of stashid undoable. */ void undo_save_stash(int stashid){ | < | | | | | 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 | return zRc; } /* ** Make the current state of stashid undoable. */ void undo_save_stash(int stashid){ db_multi_exec( "CREATE TABLE IF NOT EXISTS localdb.undo_stash" " AS SELECT * FROM stash WHERE 0;" "INSERT INTO undo_stash" " SELECT * FROM stash WHERE stashid=%d;", stashid ); db_multi_exec( "CREATE TABLE IF NOT EXISTS localdb.undo_stashfile" " AS SELECT * FROM stashfile WHERE 0;" "INSERT INTO undo_stashfile" " SELECT * FROM stashfile WHERE stashid=%d;", stashid ); } /* ** Complete the undo process if one is currently in progress. */ void undo_finish(void){
︙ | ︙ | |||
477 478 479 480 481 482 483 484 | undo_available = db_lget_int("undo_available", 0); if( dryRunFlag ){ if( undo_available==0 ){ fossil_print("No undo or redo is available\n"); }else{ Stmt q; int nChng = 0; zCmd = undo_available==1 ? "undo" : "redo"; | > | | | 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 | undo_available = db_lget_int("undo_available", 0); if( dryRunFlag ){ if( undo_available==0 ){ fossil_print("No undo or redo is available\n"); }else{ Stmt q; int nChng = 0; const char *zArticle = undo_available==1 ? "An" : "A"; zCmd = undo_available==1 ? "undo" : "redo"; fossil_print("%s %s is available for the following command:\n\n" " %s %s\n\n", zArticle, zCmd, g.argv[0], db_lget("undo_cmdline", "???")); db_prepare(&q, "SELECT existsflag, pathname FROM undo ORDER BY pathname" ); while( db_step(&q)==SQLITE_ROW ){ if( nChng==0 ){ fossil_print("The following file changes would occur if the " "command above is %sne:\n\n", zCmd); |
︙ | ︙ |
Changes to src/unicode.c.
︙ | ︙ | |||
11 12 13 14 15 16 17 | ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** | | | 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 | ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** This file is copied from ext/fts5/fts5_unicode2.c of SQLite3 with ** minor changes. */ #include "config.h" #include "unicode.h" /* ** Return true if the argument corresponds to a unicode codepoint |
︙ | ︙ | |||
47 48 49 50 51 52 53 | 0x000BBC81, 0x000DD401, 0x000DF801, 0x000E1002, 0x000E1C01, 0x000FD801, 0x00120808, 0x00156806, 0x00162402, 0x00163403, 0x00164437, 0x0017CC02, 0x0018001D, 0x00187802, 0x00192C15, 0x0019A804, 0x0019C001, 0x001B5001, 0x001B580F, 0x001B9C07, 0x001BF402, 0x001C000E, 0x001C3C01, 0x001C4401, 0x001CC01B, 0x001E980B, 0x001FAC09, 0x001FD804, 0x00205804, 0x00206C09, 0x00209403, 0x0020A405, 0x0020C00F, 0x00216403, 0x00217801, | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | > | | > > | | | | | > | | | | | > | | | | | 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 | 0x000BBC81, 0x000DD401, 0x000DF801, 0x000E1002, 0x000E1C01, 0x000FD801, 0x00120808, 0x00156806, 0x00162402, 0x00163403, 0x00164437, 0x0017CC02, 0x0018001D, 0x00187802, 0x00192C15, 0x0019A804, 0x0019C001, 0x001B5001, 0x001B580F, 0x001B9C07, 0x001BF402, 0x001C000E, 0x001C3C01, 0x001C4401, 0x001CC01B, 0x001E980B, 0x001FAC09, 0x001FD804, 0x00205804, 0x00206C09, 0x00209403, 0x0020A405, 0x0020C00F, 0x00216403, 0x00217801, 0x00235030, 0x0024E803, 0x0024F812, 0x00254407, 0x00258804, 0x0025C001, 0x00260403, 0x0026F001, 0x0026F807, 0x00271C02, 0x00272C03, 0x00275C01, 0x00278802, 0x0027C802, 0x0027E802, 0x00280403, 0x0028F001, 0x0028F805, 0x00291C02, 0x00292C03, 0x00294401, 0x0029C002, 0x0029D401, 0x002A0403, 0x002AF001, 0x002AF808, 0x002B1C03, 0x002B2C03, 0x002B8802, 0x002BC002, 0x002C0403, 0x002CF001, 0x002CF807, 0x002D1C02, 0x002D2C03, 0x002D5802, 0x002D8802, 0x002DC001, 0x002E0801, 0x002EF805, 0x002F1803, 0x002F2804, 0x002F5C01, 0x002FCC08, 0x00300004, 0x0030F807, 0x00311803, 0x00312804, 0x00315402, 0x00318802, 0x0031FC01, 0x00320403, 0x0032F001, 0x0032F807, 0x00331803, 0x00332804, 0x00335402, 0x00338802, 0x00340403, 0x0034F807, 0x00351803, 0x00352804, 0x00353C01, 0x00355C01, 0x00358802, 0x0035E401, 0x00360802, 0x00372801, 0x00373C06, 0x00375801, 0x00376008, 0x0037C803, 0x0038C401, 0x0038D007, 0x0038FC01, 0x00391C09, 0x00396802, 0x003AC401, 0x003AD006, 0x003AEC02, 0x003B2006, 0x003C041F, 0x003CD00C, 0x003DC417, 0x003E340B, 0x003E6424, 0x003EF80F, 0x003F380D, 0x0040AC14, 0x00412806, 0x00415804, 0x00417803, 0x00418803, 0x00419C07, 0x0041C404, 0x0042080C, 0x00423C01, 0x00426806, 0x0043EC01, 0x004D740C, 0x004E400A, 0x00500001, 0x0059B402, 0x005A0001, 0x005A6C02, 0x005BAC03, 0x005C4803, 0x005CC805, 0x005D4802, 0x005DC802, 0x005ED023, 0x005F6004, 0x005F7401, 0x0060000F, 0x00621402, 0x0062A401, 0x0064800C, 0x0064C00C, 0x00650001, 0x00651002, 0x00677822, 0x00685C05, 0x00687802, 0x0069540A, 0x0069801D, 0x0069FC01, 0x006A8007, 0x006AA006, 0x006AC00F, 0x006C0005, 0x006CD011, 0x006D6823, 0x006E0003, 0x006E840D, 0x006F980E, 0x006FF004, 0x00709014, 0x0070EC05, 0x0071F802, 0x00730008, 0x00734019, 0x0073B401, 0x0073C803, 0x0073E002, 0x00770036, 0x0077EC05, 0x007EF401, 0x007EFC03, 0x007F3403, 0x007F7403, 0x007FB403, 0x007FF402, 0x00800065, 0x0081980A, 0x0081E805, 0x00822805, 0x0082801F, 0x00834021, 0x00840002, 0x00840C04, 0x00842002, 0x00845001, 0x00845803, 0x00847806, 0x00849401, 0x00849C01, 0x0084A401, 0x0084B801, 0x0084E802, 0x00850005, 0x00852804, 0x00853C01, 0x00862802, 0x0086426F, 0x00900027, 0x0091000B, 0x0092704E, 0x00940276, 
0x009E53E0, 0x00ADD820, 0x00AE6022, 0x00AEF40C, 0x00AF2808, 0x00AFB004, 0x00B39406, 0x00B3BC03, 0x00B3E404, 0x00B3F802, 0x00B5C001, 0x00B5FC01, 0x00B7804F, 0x00B8C015, 0x00BA001A, 0x00BA6C59, 0x00BC00D6, 0x00BFC00C, 0x00C00005, 0x00C02019, 0x00C0A807, 0x00C0D802, 0x00C0F403, 0x00C26404, 0x00C28001, 0x00C3EC01, 0x00C64002, 0x00C6580A, 0x00C70024, 0x00C8001F, 0x00C8A81E, 0x00C94001, 0x00C98020, 0x00CA2827, 0x00CB003F, 0x00CC0100, 0x01370040, 0x02924037, 0x0293F802, 0x02983403, 0x0299BC10, 0x029A7802, 0x029BC008, 0x029C0017, 0x029C8002, 0x029E2402, 0x02A00801, 0x02A01801, 0x02A02C01, 0x02A08C09, 0x02A0D804, 0x02A1D004, 0x02A20002, 0x02A2D012, 0x02A33802, 0x02A38012, 0x02A3E003, 0x02A3F001, 0x02A4980A, 0x02A51C0D, 0x02A57C01, 0x02A60004, 0x02A6CC1B, 0x02A77802, 0x02A79401, 0x02A8A40E, 0x02A90C01, 0x02A93002, 0x02A97004, 0x02A9DC03, 0x02A9EC03, 0x02AAC001, 0x02AAC803, 0x02AADC02, 0x02AAF802, 0x02AB0401, 0x02AB7802, 0x02ABAC07, 0x02ABD402, 0x02AD6C01, 0x02AF8C0B, 0x03600001, 0x036DFC02, 0x036FFC02, 0x037FFC01, 0x03EC7801, 0x03ECA401, 0x03EEC810, 0x03F4F802, 0x03F7F002, 0x03F8001A, 0x03F88033, 0x03F95013, 0x03F9A004, 0x03FBFC01, 0x03FC040F, 0x03FC6807, 0x03FCEC06, 0x03FD6C0B, 0x03FF8007, 0x03FFA007, 0x03FFE405, 0x04040003, 0x0404DC09, 0x0405E411, 0x04063003, 0x0406400C, 0x04068001, 0x0407402E, 0x040B8001, 0x040DD805, 0x040E7C01, 0x040F4001, 0x0415BC01, 0x04215C01, 0x0421DC02, 0x04247C01, 0x0424FC01, 0x04280403, 0x04281402, 0x04283004, 0x0428E003, 0x0428FC01, 0x04294009, 0x0429FC01, 0x042B2001, 0x042B9402, 0x042BC007, 0x042CE407, 0x042E6404, 0x04400003, 0x0440E016, 0x0441FC04, 0x0442C012, 0x04440003, 0x04449C0E, 0x04450004, 0x0445CC03, 0x04460003, 0x0446CC0E, 0x04471409, 0x04476C01, 0x04477403, 0x0448B013, 0x044AA401, 0x044B7C0C, 0x044C0004, 0x044CF001, 0x044CF807, 0x044D1C02, 0x044D2C03, 0x044D5C01, 0x044D8802, 0x044D9807, 0x044DC005, 0x0450D412, 0x04512C05, 0x04516C01, 0x04517401, 0x0452C014, 0x04531801, 0x0456BC07, 0x0456E020, 0x04577002, 0x0458C014, 0x0459800D, 0x045AAC0D, 0x045C740F, 0x045CF004, 0x0470BC08, 0x0470E008, 0x04710405, 0x0471C002, 0x04724816, 0x0472A40E, 0x0491C005, 0x05A9B802, 0x05ABC006, 0x05ACC010, 0x05AD1002, 0x05BD442E, 0x05BE3C04, 0x06F27008, 0x074000F6, 0x07440027, 0x0744A4C0, 0x07480046, 0x074C0057, 0x075B0401, 0x075B6C01, 0x075BEC01, 0x075C5401, 0x075CD401, 0x075D3C01, 0x075DBC01, 0x075E2401, 0x075EA401, 0x075F0C01, 0x0760028C, 0x076A6C05, 0x076A840F, 0x07800007, 0x07802011, 0x07806C07, 0x07808C02, 0x07809805, 0x07A34007, 0x07A51007, 0x07A57802, 0x07BBC002, 0x07C0002C, 0x07C0C064, 0x07C2800F, 0x07C2C40F, 0x07C3040F, 0x07C34425, 0x07C4401F, 0x07C4C03C, 0x07C5C03D, 0x07C7981D, 0x07C8402C, 0x07C90009, 0x07C94002, 0x07CC03D3, 0x07DB800D, 0x07DBC007, 0x07DC0074, 0x07DE0055, 0x07E0000C, 0x07E04038, 0x07E1400A, 0x07E18028, 0x07E2401E, 0x07E4400F, 0x07E48008, 0x07E4C001, 0x07E4CC0C, 0x07E5000C, 0x07E5400F, 0x07E60012, 0x07E70001, 0x38000401, 0x38008060, 0x380400F0, }; static const unsigned int aAscii[4] = { 0xFFFFFFFF, 0xFC00FFFF, 0xF8000001, 0xF8000001, }; if( (unsigned int)c<128 ){ return ( (aAscii[c >> 5] & (1 << (c & 0x001F)))==0 ); }else if( (unsigned int)c<(1<<22) ){ unsigned int key = (((unsigned int)c)<<10) | 0x000003FF; int iRes = 0; int iHi = count(aEntry) - 1; int iLo = 0; while( iHi>=iLo ){ int iTest = (iHi + iLo) / 2; if( key >= aEntry[iTest] ){ iRes = iTest; iLo = iTest+1; }else{ |
︙ | ︙ | |||
194 195 196 197 198 199 200 | 'h', 'i', 'k', 'l', 'l', 'm', 'n', 'p', 'r', 'r', 's', 't', 'u', 'v', 'w', 'w', 'x', 'y', 'z', 'h', 't', 'w', 'y', 'a', 'e', 'i', 'o', 'u', 'y', }; unsigned int key = (((unsigned int)c)<<3) | 0x00000007; int iRes = 0; | | | 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 | 'h', 'i', 'k', 'l', 'l', 'm', 'n', 'p', 'r', 'r', 's', 't', 'u', 'v', 'w', 'w', 'x', 'y', 'z', 'h', 't', 'w', 'y', 'a', 'e', 'i', 'o', 'u', 'y', }; unsigned int key = (((unsigned int)c)<<3) | 0x00000007; int iRes = 0; int iHi = count(aDia) - 1; int iLo = 0; while( iHi>=iLo ){ int iTest = (iHi + iLo) / 2; if( key >= aDia[iTest] ){ iRes = iTest; iLo = iTest+1; }else{ |
︙ | ︙ | |||
256 257 258 259 260 261 262 | ** http://www.unicode.org for details. */ static const struct TableEntry { unsigned short iCode; unsigned char flags; unsigned char nRange; } aEntry[] = { | | | | | | | | | | | | | | | | | | | | | | | | | | | | > > > | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | > | | | | < > | > < | | | | | < > > > > > > | 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 | ** http://www.unicode.org for details. */ static const struct TableEntry { unsigned short iCode; unsigned char flags; unsigned char nRange; } aEntry[] = { {65, 14, 26}, {181, 66, 1}, {192, 14, 23}, {216, 14, 7}, {256, 1, 48}, {306, 1, 6}, {313, 1, 16}, {330, 1, 46}, {376, 150, 1}, {377, 1, 6}, {383, 138, 1}, {385, 52, 1}, {386, 1, 4}, {390, 46, 1}, {391, 0, 1}, {393, 44, 2}, {395, 0, 1}, {398, 34, 1}, {399, 40, 1}, {400, 42, 1}, {401, 0, 1}, {403, 44, 1}, {404, 48, 1}, {406, 54, 1}, {407, 50, 1}, {408, 0, 1}, {412, 54, 1}, {413, 56, 1}, {415, 58, 1}, {416, 1, 6}, {422, 62, 1}, {423, 0, 1}, {425, 62, 1}, {428, 0, 1}, {430, 62, 1}, {431, 0, 1}, {433, 60, 2}, {435, 1, 4}, {439, 64, 1}, {440, 0, 1}, {444, 0, 1}, {452, 2, 1}, {453, 0, 1}, {455, 2, 1}, {456, 0, 1}, {458, 2, 1}, {459, 1, 18}, {478, 1, 18}, {497, 2, 1}, {498, 1, 4}, {502, 156, 1}, {503, 168, 1}, {504, 1, 40}, {544, 144, 1}, {546, 1, 18}, {570, 74, 1}, {571, 0, 1}, {573, 142, 1}, {574, 72, 1}, {577, 0, 1}, {579, 140, 1}, {580, 30, 1}, {581, 32, 1}, {582, 1, 10}, {837, 38, 1}, {880, 1, 4}, {886, 0, 1}, {895, 38, 1}, {902, 20, 1}, {904, 18, 3}, {908, 28, 1}, {910, 26, 2}, {913, 14, 17}, {931, 14, 9}, {962, 0, 1}, {975, 4, 1}, {976, 174, 1}, {977, 176, 1}, {981, 180, 1}, {982, 178, 1}, {984, 1, 24}, {1008, 170, 1}, {1009, 172, 1}, {1012, 164, 1}, {1013, 162, 1}, {1015, 0, 1}, {1017, 186, 1}, {1018, 0, 1}, {1021, 144, 3}, {1024, 36, 16}, {1040, 14, 32}, {1120, 1, 34}, {1162, 1, 54}, {1216, 6, 1}, {1217, 1, 14}, {1232, 1, 96}, {1329, 24, 38}, {4256, 70, 38}, {4295, 70, 1}, {4301, 70, 1}, {5112, 184, 6}, {7296, 122, 1}, {7297, 124, 1}, {7298, 126, 1}, {7299, 130, 2}, {7301, 128, 1}, {7302, 132, 1}, {7303, 134, 1}, {7304, 96, 1}, {7680, 1, 150}, {7835, 166, 1}, {7838, 116, 1}, {7840, 1, 96}, {7944, 184, 8}, {7960, 184, 6}, {7976, 184, 8}, {7992, 184, 8}, {8008, 184, 6}, {8025, 185, 8}, {8040, 184, 8}, {8072, 184, 8}, {8088, 184, 8}, {8104, 184, 8}, {8120, 184, 2}, {8122, 160, 2}, {8124, 182, 1}, {8126, 120, 1}, {8136, 158, 4}, {8140, 182, 1}, {8152, 184, 2}, {8154, 154, 2}, {8168, 184, 2}, {8170, 152, 2}, {8172, 186, 1}, {8184, 146, 2}, {8186, 148, 2}, {8188, 182, 1}, {8486, 118, 1}, {8490, 112, 1}, {8491, 114, 1}, {8498, 12, 1}, {8544, 8, 16}, {8579, 0, 1}, {9398, 10, 26}, {11264, 24, 47}, {11360, 0, 1}, {11362, 108, 1}, {11363, 136, 1}, {11364, 110, 1}, {11367, 1, 6}, {11373, 104, 1}, {11374, 106, 1}, {11375, 100, 1}, {11376, 102, 1}, {11378, 0, 1}, {11381, 0, 1}, {11390, 98, 2}, {11392, 1, 100}, {11499, 1, 4}, {11506, 0, 1}, {42560, 1, 46}, {42624, 1, 28}, {42786, 1, 14}, {42802, 1, 62}, {42873, 1, 4}, {42877, 94, 1}, {42878, 1, 10}, {42891, 0, 1}, 
{42893, 86, 1}, {42896, 1, 4}, {42902, 1, 20}, {42922, 80, 1}, {42923, 76, 1}, {42924, 78, 1}, {42925, 82, 1}, {42926, 80, 1}, {42928, 90, 1}, {42929, 84, 1}, {42930, 88, 1}, {42931, 68, 1}, {42932, 1, 4}, {43888, 92, 80}, {65313, 14, 26}, }; static const unsigned short aiOff[] = { 1, 2, 8, 15, 16, 26, 28, 32, 34, 37, 38, 40, 48, 63, 64, 69, 71, 79, 80, 116, 202, 203, 205, 206, 207, 209, 210, 211, 213, 214, 217, 218, 219, 775, 928, 7264, 10792, 10795, 23217, 23221, 23228, 23231, 23254, 23256, 23275, 23278, 26672, 30204, 35267, 54721, 54753, 54754, 54756, 54787, 54793, 54809, 57153, 57274, 57921, 58019, 58363, 59314, 59315, 59324, 59325, 59326, 59332, 59356, 61722, 65268, 65341, 65373, 65406, 65408, 65410, 65415, 65424, 65436, 65439, 65450, 65462, 65472, 65476, 65478, 65480, 65482, 65488, 65506, 65511, 65514, 65521, 65527, 65528, 65529, }; int ret = c; assert( sizeof(unsigned short)==2 && sizeof(unsigned char)==1 ); if( c<128 ){ if( c>='A' && c<='Z' ) ret = c + ('a' - 'A'); }else if( c<65536 ){ const struct TableEntry *p; int iHi = count(aEntry) - 1; int iLo = 0; int iRes = -1; assert( c>aEntry[0].iCode ); while( iHi>=iLo ){ int iTest = (iHi + iLo) / 2; int cmp = (c - aEntry[iTest].iCode); if( cmp>=0 ){ iRes = iTest; iLo = iTest+1; }else{ iHi = iTest-1; } } assert( iRes>=0 && c>=aEntry[iRes].iCode ); p = &aEntry[iRes]; if( c<(p->iCode + p->nRange) && 0==(0x01 & p->flags & (p->iCode ^ c)) ){ ret = (c + (aiOff[p->flags>>1])) & 0x0000FFFF; assert( ret>0 ); } if( bRemoveDiacritic ) ret = unicode_remove_diacritic(ret); } else if( c>=66560 && c<66600 ){ ret = c + 40; } else if( c>=66736 && c<66772 ){ ret = c + 40; } else if( c>=68736 && c<68787 ){ ret = c + 64; } else if( c>=71840 && c<71872 ){ ret = c + 32; } else if( c>=125184 && c<125218 ){ ret = c + 34; } return ret; } |
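Editor's note, a worked illustration of the case-folding table above (not part of the check-in): each aEntry row gives a starting code point, a flags byte whose upper bits index the shared aiOff[] offset array and whose low bit restricts folding to alternating code points, and a range length. The routine above locates the row by binary search; the self-contained sketch below uses a linear scan over a two-entry excerpt, and mini_fold is a made-up name.

/*
** Illustration only: range-table folding with a two-entry excerpt of
** aEntry[] ({65,14,26} covers 'A'..'Z', {192,14,23} covers U+00C0..U+00D6)
** and the matching prefix of aiOff[].  flags>>1 picks the offset; a set
** low bit in flags would fold only every other code point of the range.
*/
#include <stdio.h>

struct MiniEntry { unsigned short iCode; unsigned char flags; unsigned char nRange; };

static int mini_fold(int c){
  static const struct MiniEntry a[] = { {65, 14, 26}, {192, 14, 23} };
  static const unsigned short aiOff[] = { 1, 2, 8, 15, 16, 26, 28, 32 };
  int i;
  for(i=0; i<2; i++){
    if( c>=a[i].iCode && c<a[i].iCode+a[i].nRange
     && 0==(0x01 & a[i].flags & (a[i].iCode ^ c)) ){
      return (c + aiOff[a[i].flags>>1]) & 0xFFFF;
    }
  }
  return c;
}

int main(void){
  printf("%d -> %d\n", 'A', mini_fold('A'));    /* 65 -> 97, i.e. 'a' */
  printf("%d -> %d\n", 0xC0, mini_fold(0xC0));  /* 192 -> 224, U+00C0 -> U+00E0 */
  return 0;
}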
Added src/unversioned.c.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 | /* ** Copyright (c) 2016 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) 
** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** This file contains code used to implement unversioned file interfaces. */ #include "config.h" #include <assert.h> #if defined(FOSSIL_ENABLE_MINIZ) # define MINIZ_HEADER_FILE_ONLY # include "miniz.c" #else # include <zlib.h> #endif #include "unversioned.h" #include <time.h> /* ** SQL code to implement the tables needed by the unversioned. */ static const char zUnversionedInit[] = @ CREATE TABLE IF NOT EXISTS repository.unversioned( @ uvid INTEGER PRIMARY KEY AUTOINCREMENT, -- unique ID for this file @ name TEXT UNIQUE, -- Name of the uv file @ rcvid INTEGER, -- Where received from @ mtime DATETIME, -- timestamp. Seconds since 1970. @ hash TEXT, -- Content hash. NULL if a delete marker @ sz INTEGER, -- size of content after decompression @ encoding INT, -- 0: plaintext. 1: zlib compressed @ content BLOB -- content of the file. NULL if oversized @ ); ; /* ** Make sure the unversioned table exists in the repository. */ void unversioned_schema(void){ if( !db_table_exists("repository", "unversioned") ){ db_multi_exec(zUnversionedInit/*works-like:""*/); } } /* ** Return a string which is the hash of the unversioned content. ** This is the hash used by repositories to compare content before ** exchanging a catalog. So all repositories must compute this hash ** in exactly the same way. ** ** If debugFlag is set, force the value to be recomputed and write ** the text of the hashed string to stdout. */ const char *unversioned_content_hash(int debugFlag){ const char *zHash = debugFlag ? 0 : db_get("uv-hash", 0); if( zHash==0 ){ Stmt q; db_prepare(&q, "SELECT printf('%%s %%s %%s\n',name,datetime(mtime,'unixepoch'),hash)" " FROM unversioned" " WHERE hash IS NOT NULL" " ORDER BY name" ); while( db_step(&q)==SQLITE_ROW ){ const char *z = db_column_text(&q, 0); if( debugFlag ) fossil_print("%s", z); sha1sum_step_text(z,-1); } db_finalize(&q); db_set("uv-hash", sha1sum_finish(0), 0); zHash = db_get("uv-hash",0); } return zHash; } /* ** Initialize pContent to be the content of an unversioned file zName. ** ** Return 0 on success. Return 1 if zName is not found. */ int unversioned_content(const char *zName, Blob *pContent){ Stmt q; int rc = 1; blob_init(pContent, 0, 0); db_prepare(&q, "SELECT encoding, content FROM unversioned WHERE name=%Q", zName); if( db_step(&q)==SQLITE_ROW ){ db_column_blob(&q, 1, pContent); if( db_column_int(&q, 0)==1 ){ blob_uncompress(pContent, pContent); } rc = 0; } db_finalize(&q); return rc; } /* ** Write unversioned content into the database. 
*/ static void unversioned_write( const char *zUVFile, /* Name of the unversioned file */ Blob *pContent, /* File content */ sqlite3_int64 mtime /* Modification time */ ){ Stmt ins; Blob compressed; Blob hash; db_prepare(&ins, "REPLACE INTO unversioned(name,rcvid,mtime,hash,sz,encoding,content)" " VALUES(:name,:rcvid,:mtime,:hash,:sz,:encoding,:content)" ); sha1sum_blob(pContent, &hash); blob_compress(pContent, &compressed); db_bind_text(&ins, ":name", zUVFile); db_bind_int(&ins, ":rcvid", g.rcvid); db_bind_int64(&ins, ":mtime", mtime); db_bind_text(&ins, ":hash", blob_str(&hash)); db_bind_int(&ins, ":sz", blob_size(pContent)); if( blob_size(&compressed) <= 0.8*blob_size(pContent) ){ db_bind_int(&ins, ":encoding", 1); db_bind_blob(&ins, ":content", &compressed); }else{ db_bind_int(&ins, ":encoding", 0); db_bind_blob(&ins, ":content", pContent); } db_step(&ins); blob_reset(&compressed); blob_reset(&hash); db_finalize(&ins); db_unset("uv-hash", 0); } /* ** Check the status of unversioned file zName. "mtime" and "zHash" are the ** time of last change and SHA1 hash of a copy of this file on a remote ** server. Return an integer status code as follows: ** ** 0: zName does not exist in the unversioned table. ** 1: zName exists and should be replaced by the mtime/zHash remote. ** 2: zName exists and is the same as zHash but has an older mtime. ** 3: zName exists and is identical to mtime/zHash in all respects. ** 4: zName exists and is the same as zHash but has a newer mtime. ** 5: zName exists and should override the mtime/zHash remote. */ int unversioned_status(const char *zName, sqlite3_int64 mtime, const char *zHash){ int iStatus = 0; Stmt q; db_prepare(&q, "SELECT mtime, hash FROM unversioned WHERE name=%Q", zName); if( db_step(&q)==SQLITE_ROW ){ const char *zLocalHash = db_column_text(&q, 1); int hashCmp; sqlite3_int64 iLocalMtime = db_column_int64(&q, 0); int mtimeCmp = iLocalMtime<mtime ? -1 : (iLocalMtime==mtime ? 0 : +1); if( zLocalHash==0 ) zLocalHash = "-"; hashCmp = strcmp(zLocalHash, zHash); if( hashCmp==0 ){ iStatus = 3 + mtimeCmp; }else if( mtimeCmp<0 || (mtimeCmp==0 && hashCmp<0) ){ iStatus = 1; }else{ iStatus = 5; } } db_finalize(&q); return iStatus; } /* ** Extract command-line options for the "revert" and "sync" subcommands */ static int unversioned_sync_flags(unsigned syncFlags){ if( find_option("verbose","v",0)!=0 ){ syncFlags |= SYNC_UV_TRACE | SYNC_VERBOSE; } if( find_option("dryrun","n",0)!=0 ){ syncFlags |= SYNC_UV_DRYRUN | SYNC_UV_TRACE | SYNC_VERBOSE; } return syncFlags; } /* ** Return true if zName contains any whitespace */ static int contains_whitespace(const char *zName){ while( zName[0] ){ if( fossil_isspace(zName[0]) ) return 1; zName++; } return 0; } /* ** COMMAND: uv* ** COMMAND: unversioned ** ** Usage: %fossil unversioned SUBCOMMAND ARGS... ** or: %fossil uv SUBCOMMAND ARGS... ** ** Unversioned files (UV-files) are artifacts that are synced and are available ** for download but which do not preserve history. Only the most recent version ** of each UV-file is retained. Changes to a UV-file are permanent and cannot ** be undone, so use appropriate caution with this command. ** ** Subcommands: ** ** add FILE ... Add or update unversioned files in the local ** repository so that it matches FILE on disk. ** Use "--as UVFILE" to give the file a different name ** in the repository than what it is called on disk. ** Changes are not pushed to other repositories until ** the next sync. ** ** cat FILE ... Concatenate the content of FILEs to stdout.
** ** edit FILE Bring up FILE in a text editor for modification. ** ** export FILE OUTPUT Write the content of FILE into OUTPUT on disk ** ** list | ls Show all unversioned files held in the local ** repository. ** ** revert ?URL? Restore the state of all unversioned files in the ** local repository to match the remote repository ** URL. ** ** Options: ** -v|--verbose Extra diagnostic output ** -n|--dryrun Show what would have happened ** ** remove | rm FILE ... Remove unversioned files from the local repository. ** Changes are not pushed to other repositories until ** the next sync. ** ** sync ?URL? Synchronize the state of all unversioned files with ** the remote repository URL. The most recent version ** of each file is propagated to all repositories and ** all prior versions are permanently forgotten. ** ** Options: ** -v|--verbose Extra diagnostic output ** -n|--dryrun Show what would have happened ** ** touch FILE ... Update the TIMESTAMP on all of the listed files ** ** Options: ** ** --mtime TIMESTAMP Use TIMESTAMP instead of "now" for the "add", ** "edit", "remove", and "touch" subcommands. */ void unversioned_cmd(void){ const char *zCmd; int nCmd; const char *zMtime = find_option("mtime", 0, 1); sqlite3_int64 mtime; db_find_and_open_repository(0, 0); unversioned_schema(); zCmd = g.argc>=3 ? g.argv[2] : "x"; nCmd = (int)strlen(zCmd); if( zMtime==0 ){ mtime = time(0); }else{ mtime = db_int(0, "SELECT strftime('%%s',%Q)", zMtime); if( mtime<=0 ) fossil_fatal("bad timestamp: %Q", zMtime); } if( memcmp(zCmd, "add", nCmd)==0 ){ const char *zIn; const char *zAs; Blob file; int i; zAs = find_option("as",0,1); if( zAs && g.argc!=4 ) usage("add DISKFILE --as UVFILE"); verify_all_options(); db_begin_transaction(); content_rcvid_init("#!fossil unversioned add"); for(i=3; i<g.argc; i++){ zIn = zAs ?
zAs : g.argv[i]; if( zIn[0]==0 || zIn[0]=='/' || !file_is_simple_pathname(zIn,1) ){ fossil_fatal("'%Q' is not an acceptable filename", zIn); } if( contains_whitespace(zIn) ){ fossil_fatal("names of unversioned files may not contain whitespace"); } blob_init(&file,0,0); blob_read_from_file(&file, g.argv[i]); unversioned_write(zIn, &file, mtime); blob_reset(&file); } db_end_transaction(0); }else if( memcmp(zCmd, "cat", nCmd)==0 ){ int i; verify_all_options(); db_begin_transaction(); for(i=3; i<g.argc; i++){ Blob content; if( unversioned_content(g.argv[i], &content)==0 ){ blob_write_to_file(&content, "-"); } blob_reset(&content); } db_end_transaction(0); }else if( memcmp(zCmd, "edit", nCmd)==0 ){ const char *zEditor; /* Name of the text-editor command */ const char *zTFile; /* Temporary file */ const char *zUVFile; /* Name of the unversioned file */ char *zCmd; /* Command to run the text editor */ Blob content; /* Content of the unversioned file */ verify_all_options(); if( g.argc!=4) usage("edit UVFILE"); zUVFile = g.argv[3]; zEditor = fossil_text_editor(); if( zEditor==0 ) fossil_fatal("no text editor - set the VISUAL env variable"); zTFile = fossil_temp_filename(); if( zTFile==0 ) fossil_fatal("cannot find a temporary filename"); db_begin_transaction(); content_rcvid_init("#!fossil unversioned edit"); if( unversioned_content(zUVFile, &content) ){ fossil_fatal("no such uv-file: %Q", zUVFile); } if( looks_like_binary(&content) ){ fossil_fatal("cannot edit binary content"); } #if defined(_WIN32) || defined(__CYGWIN__) blob_add_cr(&content); #endif blob_write_to_file(&content, zTFile); zCmd = mprintf("%s \"%s\"", zEditor, zTFile); if( fossil_system(zCmd) ){ fossil_fatal("editor aborted: %Q", zCmd); } fossil_free(zCmd); blob_reset(&content); blob_read_from_file(&content, zTFile); #if defined(_WIN32) || defined(__CYGWIN__) blob_to_lf_only(&content); #endif file_delete(zTFile); if( zMtime==0 ) mtime = time(0); unversioned_write(zUVFile, &content, mtime); db_end_transaction(0); blob_reset(&content); }else if( memcmp(zCmd, "export", nCmd)==0 ){ Blob content; verify_all_options(); if( g.argc!=5 ) usage("export UVFILE OUTPUT"); if( unversioned_content(g.argv[3], &content) ){ fossil_fatal("no such uv-file: %Q", g.argv[3]); } blob_write_to_file(&content, g.argv[4]); blob_reset(&content); }else if( memcmp(zCmd, "hash", nCmd)==0 ){ /* undocumented */ /* Show the hash value used during uv sync */ int debugFlag = find_option("debug",0,0)!=0; fossil_print("%s\n", unversioned_content_hash(debugFlag)); }else if( memcmp(zCmd, "list", nCmd)==0 || memcmp(zCmd, "ls", nCmd)==0 ){ Stmt q; int allFlag = find_option("all","a",0)!=0; int longFlag = find_option("l",0,0)!=0 || (nCmd>1 && zCmd[1]=='i'); verify_all_options(); if( !longFlag ){ if( allFlag ){ db_prepare(&q, "SELECT name FROM unversioned ORDER BY name"); }else{ db_prepare(&q, "SELECT name FROM unversioned WHERE hash IS NOT NULL" " ORDER BY name"); } while( db_step(&q)==SQLITE_ROW ){ fossil_print("%s\n", db_column_text(&q,0)); } }else{ db_prepare(&q, "SELECT hash, datetime(mtime,'unixepoch'), sz, length(content), name" " FROM unversioned" " ORDER BY name;" ); while( db_step(&q)==SQLITE_ROW ){ const char *zHash = db_column_text(&q, 0); const char *zNoContent = ""; if( zHash==0 ){ if( !allFlag ) continue; zHash = "(deleted)"; }else if( db_column_type(&q,3)==SQLITE_NULL ){ zNoContent = " (no content)"; } fossil_print("%12.12s %s %8d %8d %s%s\n", zHash, db_column_text(&q,1), db_column_int(&q,2), db_column_int(&q,3), db_column_text(&q,4), zNoContent ); } } 
db_finalize(&q); }else if( memcmp(zCmd, "revert", nCmd)==0 ){ unsigned syncFlags = unversioned_sync_flags(SYNC_UNVERSIONED|SYNC_UV_REVERT); g.argv[1] = "sync"; g.argv[2] = "--uv-noop"; sync_unversioned(syncFlags); }else if( memcmp(zCmd, "remove", nCmd)==0 || memcmp(zCmd, "rm", nCmd)==0 ){ int i; verify_all_options(); db_begin_transaction(); for(i=3; i<g.argc; i++){ db_multi_exec( "UPDATE unversioned" " SET hash=NULL, content=NULL, mtime=%lld, sz=0 WHERE name=%Q", mtime, g.argv[i] ); } db_unset("uv-hash", 0); db_end_transaction(0); }else if( memcmp(zCmd,"sync",nCmd)==0 ){ unsigned syncFlags = unversioned_sync_flags(SYNC_UNVERSIONED); g.argv[1] = "sync"; g.argv[2] = "--uv-noop"; sync_unversioned(syncFlags); }else if( memcmp(zCmd, "touch", nCmd)==0 ){ int i; verify_all_options(); db_begin_transaction(); for(i=3; i<g.argc; i++){ db_multi_exec( "UPDATE unversioned SET mtime=%lld WHERE name=%Q", mtime, g.argv[i] ); } db_unset("uv-hash", 0); db_end_transaction(0); }else{ usage("add|cat|edit|export|list|revert|remove|sync|touch"); } } /* ** WEBPAGE: uvlist ** ** Display a list of all unversioned files in the repository. ** Query parameters: ** ** byage=1 Order the initial display by decreasing age ** showdel=1 Show deleted files */ void uvstat_page(void){ Stmt q; sqlite3_int64 iNow; sqlite3_int64 iTotalSz = 0; int cnt = 0; int n = 0; const char *zOrderBy = "name"; int showDel = 0; char zSzName[100]; login_check_credentials(); if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Unversioned Files"); if( !db_table_exists("repository","unversioned") ){ @ No unversioned files on this server style_footer(); return; } if( PB("byage") ) zOrderBy = "mtime DESC"; if( PB("showdel") ) showDel = 1; db_prepare(&q, "SELECT" " name," " mtime," " hash," " sz," " (SELECT login FROM rcvfrom, user" " WHERE user.uid=rcvfrom.uid AND rcvfrom.rcvid=unversioned.rcvid)," " rcvid" " FROM unversioned %s ORDER BY %s", showDel ?
"" : "WHERE hash IS NOT NULL" /*safe-for-%s*/, zOrderBy/*safe-for-%s*/ ); iNow = db_int64(0, "SELECT strftime('%%s','now');"); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); sqlite3_int64 mtime = db_column_int(&q, 1); const char *zHash = db_column_text(&q, 2); int isDeleted = zHash==0; int fullSize = db_column_int(&q, 3); char *zAge = human_readable_age((iNow - mtime)/86400.0); const char *zLogin = db_column_text(&q, 4); int rcvid = db_column_int(&q,5); if( zLogin==0 ) zLogin = ""; if( (n++)==0 ){ @ <div class="uvlist"> @ <table cellpadding="2" cellspacing="0" border="1" id="uvtab"> @ <thead><tr> @ <th> Name @ <th> Age @ <th> Size @ <th> User @ <th> SHA1 if( g.perm.Admin ){ @ <th> rcvid } @ </tr></thead> @ <tbody> } @ <tr> if( isDeleted ){ sqlite3_snprintf(sizeof(zSzName), zSzName, "<i>Deleted</i>"); zHash = ""; fullSize = 0; @ <td> %h(zName) </td> }else{ approxSizeName(sizeof(zSzName), zSzName, fullSize); iTotalSz += fullSize; cnt++; @ <td> <a href='%R/uv/%T(zName)'>%h(zName)</a> </td> } @ <td data-sortkey='%016llx(-mtime)'> %s(zAge) </td> @ <td data-sortkey='%08x(fullSize)'> %s(zSzName) </td> @ <td> %h(zLogin) </td> @ <td> %h(zHash) </td> if( g.perm.Admin ){ if( rcvid ){ @ <td> <a href="%R/rcvfrom?rcvid=%d(rcvid)">%d(rcvid)</a> }else{ @ <td> } } @ </tr> fossil_free(zAge); } db_finalize(&q); if( n ){ approxSizeName(sizeof(zSzName), zSzName, iTotalSz); @ </tbody> @ <tfoot><tr><td><b>Total over %d(cnt) files</b><td><td>%s(zSzName) @ <td><td></tfoot> @ </table></div> output_table_sorting_javascript("uvtab","tkKttN",1); }else{ @ No unversioned files on this server. } style_footer(); } |
Changes to src/update.c.
︙ | ︙ | |||
151 152 153 154 155 156 157 | verify_all_options(); db_must_be_within_tree(); vid = db_lget_int("checkout", 0); user_select(); if( !dryRunFlag && !internalUpdate ){ if( autosync_loop(SYNC_PULL + SYNC_VERBOSE*verboseFlag, | | | | 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 | verify_all_options(); db_must_be_within_tree(); vid = db_lget_int("checkout", 0); user_select(); if( !dryRunFlag && !internalUpdate ){ if( autosync_loop(SYNC_PULL + SYNC_VERBOSE*verboseFlag, db_get_int("autosync-tries", 1), 1) ){ fossil_fatal("update abandoned due to sync failure"); } } /* Create any empty directories now, as well as after the update, ** so changes in settings are reflected now */ if( !dryRunFlag ) ensure_empty_dirs_created(); |
︙ | ︙ |
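Editor's note: the update.c hunk above replaces the previously hard-coded retry count with db_get_int("autosync-tries", 1), so the number of pull attempts made before an update can presumably be raised through that setting (for example "fossil settings autosync-tries 3"); the setting name is taken from the call shown here, not from separate documentation.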
Changes to src/user.c.
︙ | ︙ | |||
17 18 19 20 21 22 23 | ** ** Commands and procedures used for creating, processing, editing, and ** querying information about users. */ #include "config.h" #include "user.h" | < | > | > > > | > > > > > > > > > > > | | | | | | | | > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > > | 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 | ** ** Commands and procedures used for creating, processing, editing, and ** querying information about users. */ #include "config.h" #include "user.h" /* ** Strip leading and trailing space from a string and add the string ** onto the end of a blob. */ static void strip_string(Blob *pBlob, char *z){ int i; blob_reset(pBlob); while( fossil_isspace(*z) ){ z++; } for(i=0; z[i]; i++){ if( z[i]=='\r' || z[i]=='\n' ){ while( i>0 && fossil_isspace(z[i-1]) ){ i--; } z[i] = 0; break; } if( z[i]>0 && z[i]<' ' ) z[i] = ' '; } blob_append(pBlob, z, -1); } #if defined(_WIN32) || defined(__BIONIC__) #ifdef _WIN32 #include <conio.h> #endif /* ** getpass() for Windows and Android. 
*/ static char *zPwdBuffer = 0; static size_t nPwdBuffer = 0; static char *getpass(const char *prompt){ char *zPwd; size_t nPwd; size_t i; if( zPwdBuffer==0 ){ zPwdBuffer = fossil_secure_alloc_page(&nPwdBuffer); assert( zPwdBuffer ); }else{ fossil_secure_zero(zPwdBuffer, nPwdBuffer); } zPwd = zPwdBuffer; nPwd = nPwdBuffer; fputs(prompt,stderr); fflush(stderr); assert( zPwd!=0 ); assert( nPwd>0 ); for(i=0; i<nPwd-1; ++i){ #if defined(_WIN32) zPwd[i] = _getch(); #else zPwd[i] = getc(stdin); #endif if(zPwd[i]=='\r' || zPwd[i]=='\n'){ break; } /* BS or DEL */ else if(i>0 && (zPwd[i]==8 || zPwd[i]==127)){ i -= 2; continue; } /* CTRL-C */ else if(zPwd[i]==3) { i=0; break; } /* ESC */ else if(zPwd[i]==27){ i=0; break; } else{ fputc('*',stderr); } } zPwd[i]='\0'; fputs("\n", stderr); assert( zPwd==zPwdBuffer ); return zPwd; } void freepass(){ if( !zPwdBuffer ) return; assert( nPwdBuffer>0 ); fossil_secure_free_page(zPwdBuffer, nPwdBuffer); } #endif #if defined(_WIN32) || defined(WIN32) # include <io.h> # include <fcntl.h> # undef popen # define popen _popen # undef pclose # define pclose _pclose #endif /* ** Scramble substitution matrix: */ static char aSubst[256]; /* ** Descramble the password */ static void userDescramble(char *z){ int i; for(i=0; z[i]; i++) z[i] = aSubst[(unsigned char)z[i]]; } /* Print a string in 5-letter groups */ static void printFive(const unsigned char *z){ int i; for(i=0; z[i]; i++){ if( i>0 && (i%5)==0 ) putchar(' '); putchar(z[i]); } putchar('\n'); } /* Return a pseudo-random integer between 0 and N-1 */ static int randint(int N){ unsigned char x; assert( N<256 ); sqlite3_randomness(1, &x); return x % N; } /* ** Generate and print a random scrambling of letters a through z (omitting x) ** and set up the aSubst[] matrix to descramble. */ static void userGenerateScrambleCode(void){ unsigned char zOrig[30]; unsigned char zA[30]; unsigned char zB[30]; int nA = 25; int nB = 0; int i; memcpy(zOrig, "abcdefghijklmnopqrstuvwyz", nA+1); memcpy(zA, zOrig, nA+1); assert( nA==(int)strlen((char*)zA) ); for(i=0; i<sizeof(aSubst); i++) aSubst[i] = i; printFive(zA); while( nA>0 ){ int x = randint(nA); zB[nB++] = zA[x]; zA[x] = zA[--nA]; } assert( nB==25 ); zB[nB] = 0; printFive(zB); for(i=0; i<nB; i++) aSubst[zB[i]] = zOrig[i]; } /* ** Return the value of the FOSSIL_SECURITY_LEVEL environment variable. ** Or return 0 if that variable does not exist. */ int fossil_security_level(void){ const char *zLevel = fossil_getenv("FOSSIL_SECURITY_LEVEL"); if( zLevel==0 ) return 0; return atoi(zLevel); } /* ** Do a single prompt for a passphrase. Store the results in the blob. ** ** ** The return value is a pointer to a static buffer that is overwritten ** on subsequent calls to this same routine. */ static void prompt_for_passphrase(const char *zPrompt, Blob *pPassphrase){ char *z; #if 0 */ ** If the FOSSIL_PWREADER environment variable is set, then it will ** be the name of a program that prompts the user for their password/ ** passphrase in a secure manner. The program should take one or more ** arguments which are the prompts and should output the acquired ** passphrase as a single line on stdout. This function will read the ** output using popen(). ** ** If FOSSIL_PWREADER is not set, or if it is not the name of an ** executable, then use the C-library getpass() routine. 
*/ const char *zProg = fossil_getenv("FOSSIL_PWREADER"); if( zProg && zProg[0] ){ static char zPass[100]; Blob cmd; FILE *in; blob_zero(&cmd); blob_appendf(&cmd, "%s \"Fossil Passphrase\" \"%s\"", zProg, zPrompt); zPass[0] = 0; in = popen(blob_str(&cmd), "r"); fgets(zPass, sizeof(zPass), in); pclose(in); blob_reset(&cmd); z = zPass; }else #endif if( fossil_security_level()>=2 ){ userGenerateScrambleCode(); z = getpass(zPrompt); if( z ) userDescramble(z); printf("\033[3A\033[J"); /* Erase previous three lines */ fflush(stdout); }else{ z = getpass(zPrompt); } strip_string(pPassphrase, z); } /* ** Prompt the user for a password. Store the result in the pPassphrase ** blob. ** |
︙ | ︙ | |||
137 138 139 140 141 142 143 144 145 146 147 148 149 150 | int save_password_prompt(const char *passwd){ Blob x; char c; const char *old = db_get("last-sync-pw", 0); if( (old!=0) && fossil_strcmp(unobscure(old), passwd)==0 ){ return 0; } prompt_user("remember password (Y/n)? ", &x); c = blob_str(&x)[0]; blob_reset(&x); return ( c!='n' && c!='N' ); } /* | > | 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 | int save_password_prompt(const char *passwd){ Blob x; char c; const char *old = db_get("last-sync-pw", 0); if( (old!=0) && fossil_strcmp(unobscure(old), passwd)==0 ){ return 0; } if( fossil_security_level()>=1 ) return 0; prompt_user("remember password (Y/n)? ", &x); c = blob_str(&x)[0]; blob_reset(&x); return ( c!='n' && c!='N' ); } /* |
︙ | ︙ | |||
385 386 387 388 389 390 391 | "or setting a default user with \"fossil user default USER\".\n" ); fossil_fatal("cannot determine user"); } /* ** COMMAND: test-usernames | | | | | 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 | "or setting a default user with \"fossil user default USER\".\n" ); fossil_fatal("cannot determine user"); } /* ** COMMAND: test-usernames ** ** Usage: %fossil test-usernames ** ** Print details about sources of fossil usernames. */ void test_usernames_cmd(void){ db_find_and_open_repository(0, 0); fossil_print("Initial g.zLogin: %s\n", g.zLogin); fossil_print("Initial g.userUid: %d\n", g.userUid); fossil_print("checkout default-user: %s\n", g.localOpen ? db_lget("default-user","") : "<<no open checkout>>"); fossil_print("default-user: %s\n", db_get("default-user","")); fossil_print("FOSSIL_USER: %s\n", fossil_getenv("FOSSIL_USER")); fossil_print("USER: %s\n", fossil_getenv("USER")); |
︙ | ︙ | |||
429 430 431 432 433 434 435 436 437 438 439 440 441 442 | sqlite3_create_function(g.db, "shared_secret", 2, SQLITE_UTF8, 0, sha1_shared_secret_sql_function, 0, 0); db_multi_exec( "UPDATE user SET pw=shared_secret(pw,login), mtime=now()" " WHERE length(pw)>0 AND length(pw)!=40" ); } /* ** WEBPAGE: access_log ** ** Show login attempts, including timestamp and IP address. ** Requires Admin privileges. ** | > > > > > > > > > > > > > > > | 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 | sqlite3_create_function(g.db, "shared_secret", 2, SQLITE_UTF8, 0, sha1_shared_secret_sql_function, 0, 0); db_multi_exec( "UPDATE user SET pw=shared_secret(pw,login), mtime=now()" " WHERE length(pw)>0 AND length(pw)!=40" ); } /* ** COMMAND: test-prompt-user ** ** Usage: %fossil test-prompt-user PROMPT ** ** Prompts the user for input and then prints it verbatim (i.e. without ** a trailing line terminator). */ void test_prompt_user_cmd(void){ Blob answer; if( g.argc!=3 ) usage("PROMPT"); prompt_user(g.argv[2], &answer); fossil_print("%s", blob_str(&answer)); } /* ** WEBPAGE: access_log ** ** Show login attempts, including timestamp and IP address. ** Requires Admin privileges. ** |
︙ | ︙ | |||
492 493 494 495 496 497 498 | if( y==1 ){ blob_append(&sql, " WHERE success", -1); }else if( y==2 ){ blob_append(&sql, " WHERE NOT success", -1); } blob_append_sql(&sql," ORDER BY rowid DESC LIMIT %d OFFSET %d", n+1, skip); if( skip ){ | | | < < | | | | | < | | | 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 | if( y==1 ){ blob_append(&sql, " WHERE success", -1); }else if( y==2 ){ blob_append(&sql, " WHERE NOT success", -1); } blob_append_sql(&sql," ORDER BY rowid DESC LIMIT %d OFFSET %d", n+1, skip); if( skip ){ style_submenu_element("Newer", "%s/access_log?o=%d&n=%d&y=%d", g.zTop, skip>=n ? skip-n : 0, n, y); } rc = db_prepare_ignore_error(&q, "%s", blob_sql_text(&sql)); fLogEnabled = db_get_boolean("access-log", 0); @ <div align="center">Access logging is %s(fLogEnabled?"on":"off"). @ (Change this on the <a href="setup_settings">settings</a> page.)</div> @ <table border="1" cellpadding="5" id="logtable" align="center"> @ <thead><tr><th width="33%%">Date</th><th width="34%%">User</th> @ <th width="33%%">IP Address</th></tr></thead><tbody> while( rc==SQLITE_OK && db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); const char *zIP = db_column_text(&q, 1); const char *zDate = db_column_text(&q, 2); int bSuccess = db_column_int(&q, 3); cnt++; if( cnt>n ){ style_submenu_element("Older", "%s/access_log?o=%d&n=%d&y=%d", g.zTop, skip+n, n, y); break; } if( bSuccess ){ @ <tr> }else{ @ <tr bgcolor="#ffacc0"> } @ <td>%s(zDate)</td><td>%h(zName)</td><td>%h(zIP)</td></tr> } if( skip>0 || cnt>n ){ style_submenu_element("All", "%s/access_log?n=10000000", g.zTop); } @ </tbody></table> db_finalize(&q); @ <hr /> @ <form method="post" action="%s(g.zTop)/access_log"> @ <label><input type="checkbox" name="delold"> @ Delete all but the most recent 200 entries</input></label> @ <input type="submit" name="deloldbtn" value="Delete"></input> @ </form> @ <form method="post" action="%s(g.zTop)/access_log"> @ <label><input type="checkbox" name="delanon"> |
︙ | ︙ |
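Editor's note on the scrambled password entry added to user.c above: when FOSSIL_SECURITY_LEVEL is 2 or higher, userGenerateScrambleCode() prints the 25-letter alphabet (x omitted) in groups of five and, below it, a random permutation; the user types, for each password letter, the character in the same position of the permuted row, and userDescramble() maps the keystrokes back through aSubst[]. A self-contained sketch of that substitution, with a made-up permutation and only five letters for brevity:

/*
** Illustration only: the substitution behind userGenerateScrambleCode()
** and userDescramble().  The permuted row is fixed here; the real code
** draws a fresh random one each time and erases it from the terminal.
*/
#include <stdio.h>

int main(void){
  const char *zPlain    = "abcde";  /* row shown first                  */
  const char *zShuffled = "qwert";  /* row shown underneath             */
  char aSubst[256];
  char zTyped[] = "wqr";            /* user wants "bad", so types "wqr" */
  int i;
  for(i=0; i<256; i++) aSubst[i] = (char)i;
  for(i=0; zShuffled[i]; i++) aSubst[(unsigned char)zShuffled[i]] = zPlain[i];
  for(i=0; zTyped[i]; i++) zTyped[i] = aSubst[(unsigned char)zTyped[i]];
  printf("%s\n", zTyped);           /* prints "bad" */
  return 0;
}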
Changes to src/utf8.c.
︙ | ︙ | |||
315 316 317 318 319 320 321 | #ifdef _WIN32 int nChar, written = 0; wchar_t *zUnicode; /* Unicode version of zUtf8 */ DWORD dummy; Blob blob; static int istty[2] = { -1, -1 }; | > | | 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 | #ifdef _WIN32 int nChar, written = 0; wchar_t *zUnicode; /* Unicode version of zUtf8 */ DWORD dummy; Blob blob; static int istty[2] = { -1, -1 }; assert( toStdErr==0 || toStdErr==1 ); if( istty[toStdErr]==-1 ){ istty[toStdErr] = _isatty(toStdErr + 1) != 0; } if( !istty[toStdErr] ){ /* stdout/stderr is not a console. */ return -1; } |
︙ | ︙ |
Changes to src/util.c.
︙ | ︙ | |||
54 55 56 57 58 59 60 61 62 63 64 65 66 67 | free(p); } void *fossil_realloc(void *p, size_t n){ p = realloc(p, n); if( p==0 ) fossil_panic("out of memory"); return p; } /* ** This function implements a cross-platform "system()" interface. */ int fossil_system(const char *zOrigCmd){ int rc; #if defined(_WIN32) | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 | free(p); } void *fossil_realloc(void *p, size_t n){ p = realloc(p, n); if( p==0 ) fossil_panic("out of memory"); return p; } void fossil_secure_zero(void *p, size_t n){ volatile unsigned char *vp = (volatile unsigned char *)p; size_t i; if( p==0 ) return; assert( n>0 ); if( n==0 ) return; for(i=0; i<n; i++){ vp[i] ^= 0xFF; } for(i=0; i<n; i++){ vp[i] ^= vp[i]; } } void fossil_get_page_size(size_t *piPageSize){ #if defined(_WIN32) SYSTEM_INFO sysInfo; memset(&sysInfo, 0, sizeof(SYSTEM_INFO)); GetSystemInfo(&sysInfo); *piPageSize = (size_t)sysInfo.dwPageSize; #else *piPageSize = 4096; /* FIXME: What for POSIX? */ #endif } void *fossil_secure_alloc_page(size_t *pN){ void *p; size_t pageSize; fossil_get_page_size(&pageSize); assert( pageSize>0 ); assert( pageSize%2==0 ); #if defined(_WIN32) p = VirtualAlloc(NULL, pageSize, MEM_COMMIT|MEM_RESERVE, PAGE_READWRITE); if( p==NULL ){ fossil_fatal("VirtualAlloc failed: %lu\n", GetLastError()); } if( !VirtualLock(p, pageSize) ){ fossil_fatal("VirtualLock failed: %lu\n", GetLastError()); } #else p = fossil_malloc(pageSize); #endif fossil_secure_zero(p, pageSize); if( pN ) *pN = pageSize; return p; } void fossil_secure_free_page(void *p, size_t n){ if( !p ) return; assert( n>0 ); fossil_secure_zero(p, n); #if defined(_WIN32) if( !VirtualUnlock(p, n) ){ fossil_fatal("VirtualUnlock failed: %lu\n", GetLastError()); } if( !VirtualFree(p, 0, MEM_RELEASE) ){ fossil_fatal("VirtualFree failed: %lu\n", GetLastError()); } #else fossil_free(p); #endif } /* ** This function implements a cross-platform "system()" interface. */ int fossil_system(const char *zOrigCmd){ int rc; #if defined(_WIN32) |
︙ | ︙ | |||
339 340 341 342 343 344 345 | ** Return false if the input string contains text. */ int fossil_all_whitespace(const char *z){ if( z==0 ) return 1; while( fossil_isspace(z[0]) ){ z++; } return z[0]==0; } | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 | ** Return false if the input string contains text. */ int fossil_all_whitespace(const char *z){ if( z==0 ) return 1; while( fossil_isspace(z[0]) ){ z++; } return z[0]==0; } /* ** Return the name of the users preferred text editor. Return NULL if ** not found. ** ** Search algorithm: ** (1) The local "editor" setting ** (2) The global "editor" setting ** (3) The VISUAL environment variable ** (4) The EDITOR environment variable ** (5) (Windows only:) "notepad.exe" */ const char *fossil_text_editor(void){ const char *zEditor = db_get("editor", 0); if( zEditor==0 ){ zEditor = fossil_getenv("VISUAL"); } if( zEditor==0 ){ zEditor = fossil_getenv("EDITOR"); } #if defined(_WIN32) || defined(__CYGWIN__) if( zEditor==0 ){ zEditor = mprintf("%s\\notepad.exe", fossil_getenv("SYSTEMROOT")); #if defined(__CYGWIN__) zEditor = fossil_utf8_to_path(zEditor, 0); #endif } #endif return zEditor; } /* ** Construct a temporary filename. ** ** The returned string is obtained from sqlite3_malloc() and must be ** freed by the caller. */ char *fossil_temp_filename(void){ char *zTFile = 0; sqlite3 *db; if( g.db ){ db = g.db; }else{ sqlite3_open("",&db); } sqlite3_file_control(db, 0, SQLITE_FCNTL_TEMPFILENAME, (void*)&zTFile); if( g.db==0 ) sqlite3_close(db); return zTFile; } |
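Editor's note: the helpers added to util.c above hand out a page-sized buffer (VirtualAlloc'd and VirtualLock'd on Windows, a plain allocation elsewhere) that is wiped before release; the reworked getpass() in user.c keeps its password buffer in one. A minimal usage sketch against the signatures shown above; read_secret_into() is a hypothetical placeholder for whatever fills the buffer.

/*
** Illustration only: typical lifetime of a short-lived secret kept in a
** page obtained from fossil_secure_alloc_page().
*/
static void use_secret_once(void){
  size_t nPage = 0;
  char *zBuf = (char*)fossil_secure_alloc_page(&nPage);  /* zeroed, page-sized */
  /* read_secret_into(zBuf, nPage);  ...use the secret...  */
  fossil_secure_zero(zBuf, nPage);       /* optional early scrub;            */
  fossil_secure_free_page(zBuf, nPage);  /* free_page() scrubs again anyway  */
}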
Changes to src/vfile.c.
︙ | ︙ | |||
262 263 264 265 266 267 268 | file_set_mtime(zName, desiredMtime); currentMtime = file_wd_mtime(zName); } } } #ifndef _WIN32 if( chnged==0 || chnged==6 || chnged==7 || chnged==8 || chnged==9 ){ | | | | | | | 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 | file_set_mtime(zName, desiredMtime); currentMtime = file_wd_mtime(zName); } } } #ifndef _WIN32 if( chnged==0 || chnged==6 || chnged==7 || chnged==8 || chnged==9 ){ if( origPerm==currentPerm ){ chnged = 0; }else if( currentPerm==PERM_EXE ){ chnged = 6; }else if( currentPerm==PERM_LNK ){ chnged = 7; }else if( origPerm==PERM_EXE ){ chnged = 8; }else if( origPerm==PERM_LNK ){ chnged = 9; } } #endif if( currentMtime!=oldMtime || chnged!=oldChnged ){ db_multi_exec("UPDATE vfile SET mtime=%lld, chnged=%d WHERE id=%d", currentMtime, chnged, id); |
︙ | ︙ | |||
346 347 348 349 350 351 352 | promptFlag = 0; } else if( cReply!='y' && cReply!='Y' ){ blob_reset(&content); continue; } } if( verbose ) fossil_print("%s\n", &zName[nRepos]); | | | | 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 | promptFlag = 0; } else if( cReply!='y' && cReply!='Y' ){ blob_reset(&content); continue; } } if( verbose ) fossil_print("%s\n", &zName[nRepos]); if( file_wd_isdir(zName)==1 ){ /*TODO(dchest): remove directories? */ fossil_fatal("%s is directory, cannot overwrite", zName); } if( file_wd_size(zName)>=0 && (isLink || file_wd_islink(0)) ){ file_delete(zName); } if( isLink ){ symlink_create(blob_str(&content), zName); }else{ |
︙ | ︙ | |||
433 434 435 436 437 438 439 | if( sqlite3_strglob("ci-comment-????????????.txt", zName)==0 ) return 1; for(; zName[0]!=0; zName++){ if( zName[0]=='/' && sqlite3_strglob("/ci-comment-????????????.txt", zName)==0 ){ return 1; } if( zName[0]!='-' ) continue; | | > > | 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 | if( sqlite3_strglob("ci-comment-????????????.txt", zName)==0 ) return 1; for(; zName[0]!=0; zName++){ if( zName[0]=='/' && sqlite3_strglob("/ci-comment-????????????.txt", zName)==0 ){ return 1; } if( zName[0]!='-' ) continue; for(i=0; i<count(azTemp); i++){ n = (int)strlen(azTemp[i]); if( memcmp(azTemp[i], zName+1, n) ) continue; if( zName[n+1]==0 ) return 1; if( zName[n+1]=='-' ){ for(j=n+2; zName[j] && fossil_isdigit(zName[j]); j++){} if( zName[j]==0 ) return 1; } } } return 0; } #if INTERFACE /* ** Values for the scanFlags parameter to vfile_scan(). */ #define SCAN_ALL 0x001 /* Includes files that begin with "." */ #define SCAN_TEMP 0x002 /* Only Fossil-generated files like *-baseline */ #define SCAN_NESTED 0x004 /* Scan for empty dirs in nested checkouts */ #define SCAN_MTIME 0x008 /* Populate mtime column */ #define SCAN_SIZE 0x010 /* Populate size column */ #endif /* INTERFACE */ /* ** Load into table SFILE the name of every ordinary file in ** the directory pPath. Omit the first nPrefix characters of ** of pPath when inserting into the SFILE table. ** |
︙ | ︙ | |||
496 497 498 499 500 501 502 | if( glob_match(pIgnore2, &blob_str(pPath)[nPrefix+1]) ) skipAll = 1; blob_resize(pPath, origSize); } if( skipAll ) return; if( depth==0 ){ db_prepare(&ins, | | | | > > > > > | 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 | if( glob_match(pIgnore2, &blob_str(pPath)[nPrefix+1]) ) skipAll = 1; blob_resize(pPath, origSize); } if( skipAll ) return; if( depth==0 ){ db_prepare(&ins, "INSERT OR IGNORE INTO sfile(pathname%s%s) SELECT :file%s%s" " WHERE NOT EXISTS(SELECT 1 FROM vfile WHERE" " pathname=:file %s)", scanFlags & SCAN_MTIME ? ", mtime" : "", scanFlags & SCAN_SIZE ? ", size" : "", scanFlags & SCAN_MTIME ? ", :mtime" : "", scanFlags & SCAN_SIZE ? ", :size" : "", filename_collation() ); } depth++; zNative = fossil_utf8_to_path(blob_str(pPath), 1); d = opendir(zNative); if( d ){ |
︙ | ︙ | |||
537 538 539 540 541 542 543 544 545 546 547 548 549 550 | }else if( (pEntry->d_type==DT_UNKNOWN || pEntry->d_type==DT_LNK) ? (file_wd_isfile_or_link(zPath)) : (pEntry->d_type==DT_REG) ){ #else }else if( file_wd_isfile_or_link(zPath) ){ #endif if( (scanFlags & SCAN_TEMP)==0 || is_temporary_file(zUtf8) ){ db_bind_text(&ins, ":file", &zPath[nPrefix+1]); db_step(&ins); db_reset(&ins); } } fossil_path_free(zUtf8); blob_resize(pPath, origSize); } | > > > > > > | 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 | }else if( (pEntry->d_type==DT_UNKNOWN || pEntry->d_type==DT_LNK) ? (file_wd_isfile_or_link(zPath)) : (pEntry->d_type==DT_REG) ){ #else }else if( file_wd_isfile_or_link(zPath) ){ #endif if( (scanFlags & SCAN_TEMP)==0 || is_temporary_file(zUtf8) ){ db_bind_text(&ins, ":file", &zPath[nPrefix+1]); if( scanFlags & SCAN_MTIME ){ db_bind_int(&ins, ":mtime", file_mtime(zPath)); } if( scanFlags & SCAN_SIZE ){ db_bind_int(&ins, ":size", file_size(zPath)); } db_step(&ins); db_reset(&ins); } } fossil_path_free(zUtf8); blob_resize(pPath, origSize); } |
︙ | ︙ | |||
916 917 918 919 920 921 922 | blob_zero(&err); if( pManOut ){ blob_zero(pManOut); } db_must_be_within_tree(); pManifest = manifest_get(vid, CFTYPE_MANIFEST, &err); if( pManifest==0 ){ | | | 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 | blob_zero(&err); if( pManOut ){ blob_zero(pManOut); } db_must_be_within_tree(); pManifest = manifest_get(vid, CFTYPE_MANIFEST, &err); if( pManifest==0 ){ fossil_fatal("manifest file (%d) is malformed:\n%s", vid, blob_str(&err)); } manifest_file_rewind(pManifest); while( (pFile = manifest_file_next(pManifest,0))!=0 ){ if( pFile->zUuid==0 ) continue; fid = uuid_to_rid(pFile->zUuid, 0); md5sum_step_text(pFile->zName, -1); |
︙ | ︙ |
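Editor's note: with the new SCAN_MTIME and SCAN_SIZE flags, vfile_scan() widens both the column list and the SELECT of the SFILE insert it prepares, then binds :mtime and :size per file. For orientation only, the prepared statement with both flags set reads roughly as follows; whatever term filename_collation() appends to the WHERE clause is left out.

/*
** Illustration only: approximate expansion of the SFILE insert above
** when scanFlags contains SCAN_MTIME|SCAN_SIZE.
*/
static const char zSFileInsertExpanded[] =
  "INSERT OR IGNORE INTO sfile(pathname, mtime, size)"
  " SELECT :file, :mtime, :size"
  " WHERE NOT EXISTS(SELECT 1 FROM vfile WHERE pathname=:file)";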
Changes to src/wiki.c.
︙ | ︙ | |||
135 136 137 138 139 140 141 | /* ** Only allow certain mimetypes through. ** All others become "text/x-fossil-wiki" */ const char *wiki_filter_mimetypes(const char *zMimetype){ if( zMimetype!=0 ){ | | | | 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 | /* ** Only allow certain mimetypes through. ** All others become "text/x-fossil-wiki" */ const char *wiki_filter_mimetypes(const char *zMimetype){ if( zMimetype!=0 ){ int i; for(i=0; i<count(azStyles); i+=3){ if( fossil_strcmp(zMimetype,azStyles[i+2])==0 ){ return azStyles[i]; } } if( fossil_strcmp(zMimetype, "text/x-markdown")==0 || fossil_strcmp(zMimetype, "text/plain")==0 ){ return zMimetype; |
︙ | ︙ | |||
181 182 183 184 185 186 187 | ** Show a summary of the Markdown wiki formatting rules. */ void markdown_rules_page(void){ Blob x; int fTxt = P("txt")!=0; style_header("Markdown Formatting Rules"); if( fTxt ){ | | | | 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 | ** Show a summary of the Markdown wiki formatting rules. */ void markdown_rules_page(void){ Blob x; int fTxt = P("txt")!=0; style_header("Markdown Formatting Rules"); if( fTxt ){ style_submenu_element("Formatted", "%R/md_rules"); }else{ style_submenu_element("Plain-Text", "%R/md_rules?txt=1"); } blob_init(&x, builtin_text("markdown.md"), -1); wiki_render_by_mimetype(&x, fTxt ? "text/plain" : "text/x-markdown"); blob_reset(&x); style_footer(); } |
︙ | ︙ | |||
229 230 231 232 233 234 235 | #define W_ALL_BUT(x) (W_ALL&~(x)) /* ** Add some standard submenu elements for wiki screens. */ static void wiki_standard_submenu(unsigned int ok){ if( (ok & W_SRCH)!=0 && search_restrict(SRCH_WIKI)!=0 ){ | | | | | | | 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 | #define W_ALL_BUT(x) (W_ALL&~(x)) /* ** Add some standard submenu elements for wiki screens. */ static void wiki_standard_submenu(unsigned int ok){ if( (ok & W_SRCH)!=0 && search_restrict(SRCH_WIKI)!=0 ){ style_submenu_element("Search", "%R/wikisrch"); } if( (ok & W_LIST)!=0 ){ style_submenu_element("List", "%R/wcontent"); } if( (ok & W_HELP)!=0 ){ style_submenu_element("Help", "%R/wikihelp"); } if( (ok & W_NEW)!=0 && g.anon.NewWiki ){ style_submenu_element("New", "%R/wikinew"); } #if 0 if( (ok & W_BLOG)!=0 #endif if( (ok & W_SANDBOX)!=0 ){ style_submenu_element("Sandbox", "%R/wiki?name=Sandbox"); } } /* ** WEBPAGE: wikihelp ** A generic landing page for wiki. */ |
︙ | ︙ | |||
364 365 366 367 368 369 370 | zBody = pWiki->zWiki; zMimetype = pWiki->zMimetype; } } zMimetype = wiki_filter_mimetypes(zMimetype); if( !g.isHome ){ if( rid ){ | | < | < | < | < < | | < | | 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 | zBody = pWiki->zWiki; zMimetype = pWiki->zMimetype; } } zMimetype = wiki_filter_mimetypes(zMimetype); if( !g.isHome ){ if( rid ){ style_submenu_element("Diff", "%R/wdiff?name=%T&a=%d", zPageName, rid); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); style_submenu_element("Details", "%R/info/%s", zUuid); } if( (rid && g.anon.WrWiki) || (!rid && g.anon.NewWiki) ){ if( db_get_boolean("wysiwyg-wiki", 0) ){ style_submenu_element("Edit", "%s/wikiedit?name=%T&wysiwyg=1", g.zTop, zPageName); }else{ style_submenu_element("Edit", "%s/wikiedit?name=%T", g.zTop, zPageName); } } if( rid && g.anon.ApndWiki && g.anon.Attach ){ style_submenu_element("Attach", "%s/attachadd?page=%T&from=%s/wiki%%3fname=%T", g.zTop, zPageName, g.zTop, zPageName); } if( rid && g.anon.ApndWiki ){ style_submenu_element("Append", "%s/wikiappend?name=%T&mimetype=%s", g.zTop, zPageName, zMimetype); } if( g.perm.Hyperlink ){ style_submenu_element("History", "%s/whistory?name=%T", g.zTop, zPageName); } } style_set_current_page("%T?name=%T", g.zPath, zPageName); style_header("%s", zPageName); wiki_standard_submenu(submenuFlags); blob_init(&wiki, zBody, -1); |
︙ | ︙ | |||
432 433 434 435 436 437 438 | /* ** Output a selection box from which the user can select the ** wiki mimetype. */ void mimetype_option_menu(const char *zMimetype){ unsigned i; @ <select name="mimetype" size="1"> | | | 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 | /* ** Output a selection box from which the user can select the ** wiki mimetype. */ void mimetype_option_menu(const char *zMimetype){ unsigned i; @ <select name="mimetype" size="1"> for(i=0; i<count(azStyles); i+=3){ if( fossil_strcmp(zMimetype,azStyles[i])==0 ){ @ <option value="%s(azStyles[i])" selected>%s(azStyles[i+1])</option> }else{ @ <option value="%s(azStyles[i])">%s(azStyles[i+1])</option> } } @ </select> |
︙ | ︙ | |||
680 681 682 683 684 685 686 | char *zId; zDate = db_text(0, "SELECT datetime('now')"); zRemark = PD("r",""); zUser = PD("u",g.zLogin); if( fossil_strcmp(zMimetype, "text/x-fossil-wiki")==0 ){ zId = db_text(0, "SELECT lower(hex(randomblob(8)))"); | | | 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 | char *zId; zDate = db_text(0, "SELECT datetime('now')"); zRemark = PD("r",""); zUser = PD("u",g.zLogin); if( fossil_strcmp(zMimetype, "text/x-fossil-wiki")==0 ){ zId = db_text(0, "SELECT lower(hex(randomblob(8)))"); blob_appendf(p, "\n\n<hr /><div id=\"%s\"><i>On %s UTC %h", zId, zDate, login_name()); if( zUser[0] && fossil_strcmp(zUser,login_name()) ){ blob_appendf(p, " (claiming to be %h)", zUser); } blob_appendf(p, " added:</i><br />\n%s</div id=\"%s\">", zRemark, zId); }else if( fossil_strcmp(zMimetype, "text/x-markdown")==0 ){ blob_appendf(p, "\n\n------\n*On %s UTC %h", zDate, login_name()); |
︙ | ︙ | |||
801 802 803 804 805 806 807 | if( !goodCaptcha ){ @ <p class="generalError">Error: Incorrect security code.</p> } if( P("preview")!=0 ){ Blob preview; blob_zero(&preview); appendRemark(&preview, zMimetype); | | | | 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 | if( !goodCaptcha ){ @ <p class="generalError">Error: Incorrect security code.</p> } if( P("preview")!=0 ){ Blob preview; blob_zero(&preview); appendRemark(&preview, zMimetype); @ Preview:<hr /> wiki_render_by_mimetype(&preview, zMimetype); @ <hr /> blob_reset(&preview); } zUser = PD("u", g.zLogin); form_begin(0, "%R/wikiappend"); login_insert_csrf_secret(); @ <input type="hidden" name="name" value="%h(zPageName)" /> @ <input type="hidden" name="mimetype" value="%h(zMimetype)" /> |
︙ | ︙ | |||
949 950 951 952 953 954 955 | Stmt q; int showAll = P("all")!=0; login_check_credentials(); if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } style_header("Available Wiki Pages"); if( showAll ){ | | | | 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 | Stmt q; int showAll = P("all")!=0; login_check_credentials(); if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } style_header("Available Wiki Pages"); if( showAll ){ style_submenu_element("Active", "%s/wcontent", g.zTop); }else{ style_submenu_element("All", "%s/wcontent?all=1", g.zTop); } wiki_standard_submenu(W_ALL_BUT(W_LIST)); @ <ul> wiki_prepare_page_list(&q); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); int size = db_column_int(&q, 1); |
︙ | ︙ | |||
1165 1166 1167 1168 1169 1170 1171 | ** COMMAND: wiki* ** ** Usage: %fossil wiki (export|create|commit|list) WikiName ** ** Run various subcommands to work with wiki entries or tech notes. ** ** %fossil wiki export PAGENAME ?FILE? | | | | | | | 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 | ** COMMAND: wiki* ** ** Usage: %fossil wiki (export|create|commit|list) WikiName ** ** Run various subcommands to work with wiki entries or tech notes. ** ** %fossil wiki export PAGENAME ?FILE? ** %fossil wiki export ?FILE? -t|--technote DATETIME|TECHNOTE-ID ** ** Sends the latest version of either a wiki page or of a tech note ** to the given file or standard output. ** If PAGENAME is provided, the wiki page will be output. For ** a tech note either DATETIME or TECHNOTE-ID must be specified. If ** DATETIME is used, the most recently modified tech note with that ** DATETIME will be sent. ** ** %fossil wiki (create|commit) PAGENAME ?FILE? ?OPTIONS? ** ** Create a new or commit changes to an existing wiki page or ** technote from FILE or from standard input. PAGENAME is the ** name of the wiki entry or the timeline comment of the ** technote. ** ** Options: ** -M|--mimetype TEXT-FORMAT The mime type of the update. ** Defaults to the type used by |
︙ | ︙ | |||
1206 1207 1208 1209 1210 1211 1212 | ** --technote-bgcolor COLOR The color used for the technote ** on the timeline. ** ** %fossil wiki list ?OPTIONS? ** %fossil wiki ls ?OPTIONS? ** ** Lists all wiki entries, one per line, ordered | | > > > > > > | 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 | ** --technote-bgcolor COLOR The color used for the technote ** on the timeline. ** ** %fossil wiki list ?OPTIONS? ** %fossil wiki ls ?OPTIONS? ** ** Lists all wiki entries, one per line, ordered ** case-insensitively by name. ** ** Options: ** -t|--technote Technotes will be listed instead of ** pages. The technotes will be in order ** of timestamp with the most recent ** first. ** -s|--show-technote-ids The id of the tech note will be listed ** along side the timestamp. The tech note ** id will be the first word on each line. ** This option only applies if the ** --technote option is also specified. ** ** DATETIME may be "now" or "YYYY-MM-DDTHH:MM:SS.SSS". If in ** year-month-day form, it may be truncated, the "T" may be replaced by ** a space, and it may also name a timezone offset from UTC as "-HH:MM" ** (westward) or "+HH:MM" (eastward). Either no timezone suffix or "Z" ** means UTC. ** */ void wiki_cmd(void){ int n; db_find_and_open_repository(0, 0); if( g.argc<3 ){ goto wiki_cmd_usage; |
︙ | ︙ | |||
1264 1265 1266 1267 1268 1269 1270 | } zFile = (g.argc==4) ? "-" : g.argv[4]; }else{ if( (g.argc!=3) && (g.argc!=4) ){ usage("export ?FILE? --technote DATETIME|TECHNOTE-ID"); } rid = wiki_technote_to_rid(zETime); | | | 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 | } zFile = (g.argc==4) ? "-" : g.argv[4]; }else{ if( (g.argc!=3) && (g.argc!=4) ){ usage("export ?FILE? --technote DATETIME|TECHNOTE-ID"); } rid = wiki_technote_to_rid(zETime); if ( rid==-1 ){ fossil_fatal("ambiguous tech note id: %s", zETime); } if( (pWiki = manifest_get(rid, CFTYPE_EVENT, 0))!=0 ){ zBody = pWiki->zWiki; } if( zBody==0 ){ fossil_fatal("technote [%s] not found",zETime); |
︙ | ︙ | |||
1287 1288 1289 1290 1291 1292 1293 | blob_reset(&body); manifest_destroy(pWiki); return; }else if( strncmp(g.argv[2],"commit",n)==0 || strncmp(g.argv[2],"create",n)==0 ){ const char *zPageName; /* page name */ Blob content; /* Input content */ | | | | | | | | | | 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 | blob_reset(&body); manifest_destroy(pWiki); return; }else if( strncmp(g.argv[2],"commit",n)==0 || strncmp(g.argv[2],"create",n)==0 ){ const char *zPageName; /* page name */ Blob content; /* Input content */ int rid = 0; Manifest *pWiki = 0; /* Parsed wiki page content */ const char *zMimeType = find_option("mimetype", "M", 1); const char *zETime = find_option("technote", "t", 1); const char *zTags = find_option("technote-tags", NULL, 1); const char *zClr = find_option("technote-bgcolor", NULL, 1); if( g.argc!=4 && g.argc!=5 ){ usage("commit|create PAGENAME ?FILE? [--mimetype TEXT-FORMAT]" " [--technote DATETIME] [--technote-tags TAGS]" " [--technote-bgcolor COLOR]"); } zPageName = g.argv[3]; if( g.argc==4 ){ blob_read_from_channel(&content, stdin, -1); }else{ blob_read_from_file(&content, g.argv[4]); } if( !zMimeType || !*zMimeType ){ /* Try to deduce the mime type based on the prior version. */ if ( !zETime ){ rid = db_int(0, "SELECT x.rid FROM tag t, tagxref x" " WHERE x.tagid=t.tagid AND t.tagname='wiki-%q'" " ORDER BY x.mtime DESC LIMIT 1", zPageName ); if( rid>0 && (pWiki = manifest_get(rid, CFTYPE_WIKI, 0))!=0 && (pWiki->zMimetype && *pWiki->zMimetype) ){ zMimeType = pWiki->zMimetype; } }else{ rid = wiki_technote_to_rid(zETime); if( rid>0 && (pWiki = manifest_get(rid, CFTYPE_EVENT, 0))!=0 && (pWiki->zMimetype && *pWiki->zMimetype) ){ zMimeType = pWiki->zMimetype; } } }else{ zMimeType = wiki_filter_mimetypes(zMimeType); } if( g.argv[2][1]=='r' && rid>0 ){ if ( !zETime ){ fossil_fatal("wiki page %s already exists", zPageName); }else{ /* Creating a tech note with same timestamp is permitted and should create a new tech note */ rid = 0; } }else if( g.argv[2][1]=='o' && rid == 0 ){ if ( !zETime ){ fossil_fatal("no such wiki page: %s", zPageName); }else{ fossil_fatal("no such tech note: %s", zETime); } |
︙ | ︙ | |||
1394 1395 1396 1397 1398 1399 1400 | " FROM event e, tag t" " WHERE e.type='e'" " AND e.tagid IS NOT NULL" " AND t.tagid=e.tagid" " ORDER BY e.mtime DESC /*sort*/" ); } | | | > > > > > > > > > > > > > > > > > > > | 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 | " FROM event e, tag t" " WHERE e.type='e'" " AND e.tagid IS NOT NULL" " AND t.tagid=e.tagid" " ORDER BY e.mtime DESC /*sort*/" ); } while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); if( showIds ){ const char *zUuid = db_column_text(&q, 1); fossil_print("%s ",zUuid); } fossil_print( "%s\n",zName); } db_finalize(&q); }else{ goto wiki_cmd_usage; } return; wiki_cmd_usage: usage("export|create|commit|list ..."); } /* ** COMMAND: test-markdown-render ** ** Usage: %fossil test-markdown-render FILE ** ** Render markdown wiki from FILE to stdout. ** */ void test_markdown_render(void){ Blob in, out; db_find_and_open_repository(0,0); verify_all_options(); if( g.argc!=3 ) usage("FILE"); blob_zero(&out); blob_read_from_file(&in, g.argv[2]); markdown_to_html(&in, 0, &out); blob_write_to_file(&out, "-"); } |
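Editor's usage note for the tech-note options documented above (file names and timestamps are invented for the example): "fossil wiki export note.md --technote 2016-11-07T00:50:10" writes the most recently modified tech note with that timestamp, "fossil wiki list --technote --show-technote-ids" lists tech notes newest first with each id as the first word of its line, and the new "fossil test-markdown-render FILE" command renders a Markdown file to stdout.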
Changes to src/wikiformat.c.
︙ | ︙ | |||
143 144 145 146 147 148 149 | /* ** Use binary search to locate a tag in the aAttribute[] table. */ static int findAttr(const char *z){ int i, c, first, last; first = 1; | | | 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 | /* ** Use binary search to locate a tag in the aAttribute[] table. */ static int findAttr(const char *z){ int i, c, first, last; first = 1; last = count(aAttribute) - 1; while( first<=last ){ i = (first+last)/2; c = fossil_strcmp(aAttribute[i].zName, z); if( c==0 ){ return i; }else if( c<0 ){ first = i+1; |
︙ | ︙ | |||
370 371 372 373 374 375 376 | { "var", MARKUP_VAR, MUTYPE_FONT, AMSK_STYLE }, { "verbatim", MARKUP_VERBATIM, MUTYPE_SPECIAL, AMSK_ID|AMSK_TYPE }, }; void show_allowed_wiki_markup( void ){ int i; /* loop over allowedAttr */ | | | | 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 | { "var", MARKUP_VAR, MUTYPE_FONT, AMSK_STYLE }, { "verbatim", MARKUP_VERBATIM, MUTYPE_SPECIAL, AMSK_ID|AMSK_TYPE }, }; void show_allowed_wiki_markup( void ){ int i; /* loop over allowedAttr */ for( i=1 ; i<=count(aMarkup) - 1 ; i++ ){ @ <%s(aMarkup[i].zName)> } } /* ** Use binary search to locate a tag in the aMarkup[] table. */ static int findTag(const char *z){ int i, c, first, last; first = 1; last = count(aMarkup) - 1; while( first<=last ){ i = (first+last)/2; c = fossil_strcmp(aMarkup[i].zName, z); if( c==0 ){ assert( aMarkup[i].iCode==i ); return i; }else if( c<0 ){ |
︙ | ︙ | |||
945 946 947 948 949 950 951 | while( z[i] && z[i]!='<' ){ i++; } if( fossil_strnicmp(&z[i], "</a>",4)!=0 ) return 0; for(j=*pN; fossil_isspace(z[j]); j++){} zTag = mprintf("%.*s", i-j, &z[j]); j = (int)strlen(zTag); while( j>0 && fossil_isspace(zTag[j-1]) ){ j--; } if( j==0 ) return 0; | | | 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 | while( z[i] && z[i]!='<' ){ i++; } if( fossil_strnicmp(&z[i], "</a>",4)!=0 ) return 0; for(j=*pN; fossil_isspace(z[j]); j++){} zTag = mprintf("%.*s", i-j, &z[j]); j = (int)strlen(zTag); while( j>0 && fossil_isspace(zTag[j-1]) ){ j--; } if( j==0 ) return 0; style_submenu_element(zTag, "%s", zHref); *pN = i+4; return 1; } /* ** Pop a single element off of the stack. As the element is popped, ** output its end tag if it is not a </div> tag. |
︙ | ︙ | |||
2185 2186 2187 2188 2189 2190 2191 | static const struct { int n; char c; char *z; } aEntity[] = { { 5, '&', "&" }, { 4, '<', "<" }, { 4, '>', ">" }, { 6, ' ', " " }, }; int jj; | | | 2185 2186 2187 2188 2189 2190 2191 2192 2193 2194 2195 2196 2197 2198 2199 | static const struct { int n; char c; char *z; } aEntity[] = { { 5, '&', "&" }, { 4, '<', "<" }, { 4, '>', ">" }, { 6, ' ', " " }, }; int jj; for(jj=0; jj<count(aEntity); jj++){ if( aEntity[jj].n==n && strncmp(aEntity[jj].z,zIn,n)==0 ){ c = aEntity[jj].c; break; } } } if( fossil_isspace(c) ){ |
︙ | ︙ |
Changes to src/winhttp.c.
︙ | ︙ | |||
19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 | ** for windows. It also implements a Windows Service which allows the HTTP ** server to be run without any user logged on. */ #include "config.h" #ifdef _WIN32 /* This code is for win32 only */ #include <windows.h> #include "winhttp.h" /* ** The HttpRequest structure holds information about each incoming ** HTTP request. */ typedef struct HttpRequest HttpRequest; struct HttpRequest { int id; /* ID counter */ SOCKET s; /* Socket on which to receive data */ SOCKADDR_IN addr; /* Address from which data is coming */ int flags; /* Flags passed to win32_http_server() */ | > > > > > > > > > > > > > > > | | 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 | ** for windows. It also implements a Windows Service which allows the HTTP ** server to be run without any user logged on. */ #include "config.h" #ifdef _WIN32 /* This code is for win32 only */ #include <windows.h> #include <process.h> #include "winhttp.h" /* ** The HttpServer structure holds information about an instance of ** the HTTP server itself. */ typedef struct HttpServer HttpServer; struct HttpServer { HANDLE hStoppedEvent; /* Event to signal when server is stopped, ** must be closed by callee. */ char *zStopper; /* The stopper file name, must be freed by ** callee. */ SOCKET listener; /* Socket on which the server is listening, ** may be closed by callee. */ }; /* ** The HttpRequest structure holds information about each incoming ** HTTP request. */ typedef struct HttpRequest HttpRequest; struct HttpRequest { int id; /* ID counter */ SOCKET s; /* Socket on which to receive data */ SOCKADDR_IN addr; /* Address from which data is coming */ int flags; /* Flags passed to win32_http_server() */ const char *zOptions; /* --baseurl, --notfound, --localauth, --th-trace */ }; /* ** Prefix for a temporary file. */ static char *zTempPrefix; |
︙ | ︙ | |||
67 68 69 70 71 72 73 74 75 76 77 78 79 80 | static NORETURN void winhttp_fatal( const char *zOp, const char *zService, const char *zErr ){ fossil_fatal("unable to %s service '%s': %s", zOp, zService, zErr); } /* ** Process a single incoming HTTP request. */ static void win32_http_request(void *pAppData){ HttpRequest *p = (HttpRequest*)pAppData; FILE *in = 0, *out = 0; | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 | static NORETURN void winhttp_fatal( const char *zOp, const char *zService, const char *zErr ){ fossil_fatal("unable to %s service '%s': %s", zOp, zService, zErr); } /* ** Make sure the server stops as soon as possible after the stopper file ** is found. If there is no stopper file name, do nothing. */ static void win32_server_stopper(void *pAppData){ HttpServer *p = (HttpServer*)pAppData; if( p!=0 ){ HANDLE hStoppedEvent = p->hStoppedEvent; const char *zStopper = p->zStopper; SOCKET listener = p->listener; if( hStoppedEvent!=NULL && zStopper!=0 && listener!=INVALID_SOCKET ){ while( 1 ){ DWORD dwResult = WaitForMultipleObjectsEx(1, &hStoppedEvent, FALSE, 1000, TRUE); if( dwResult!=WAIT_IO_COMPLETION && dwResult!=WAIT_TIMEOUT ){ /* The event is either invalid, signaled, or abandoned. Bail ** out now because those conditions should indicate the parent ** thread is dead or dying. */ break; } if( file_size(zStopper)>=0 ){ /* The stopper file has been found. Attempt to close the server ** listener socket now and then exit. */ closesocket(listener); p->listener = INVALID_SOCKET; break; } } } if( hStoppedEvent!=NULL ){ CloseHandle(hStoppedEvent); p->hStoppedEvent = NULL; } if( zStopper!=0 ){ fossil_free(p->zStopper); p->zStopper = 0; } fossil_free(p); } } /* ** Process a single incoming HTTP request. */ static void win32_http_request(void *pAppData){ HttpRequest *p = (HttpRequest*)pAppData; FILE *in = 0, *out = 0; |
︙ | ︙ | |||
160 161 162 163 164 165 166 | end_request: if( out ) fclose(out); if( in ) fclose(in); closesocket(p->s); file_delete(zRequestFName); file_delete(zReplyFName); file_delete(zCmdFName); | | | 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 | end_request: if( out ) fclose(out); if( in ) fclose(in); closesocket(p->s); file_delete(zRequestFName); file_delete(zReplyFName); file_delete(zCmdFName); fossil_free(p); } /* ** Process a single incoming SCGI request. */ static void win32_scgi_request(void *pAppData){ HttpRequest *p = (HttpRequest*)pAppData; |
︙ | ︙ | |||
222 223 224 225 226 227 228 | end_request: if( out ) fclose(out); if( in ) fclose(in); closesocket(p->s); file_delete(zRequestFName); file_delete(zReplyFName); | | > > > > > < > > > > > > > > > > > | 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 | end_request: if( out ) fclose(out); if( in ) fclose(in); closesocket(p->s); file_delete(zRequestFName); file_delete(zReplyFName); fossil_free(p); } /* ** Start a listening socket and process incoming HTTP requests on ** that socket. */ void win32_http_server( int mnPort, int mxPort, /* Range of allowed TCP port numbers */ const char *zBrowser, /* Command to launch browser. (Or NULL) */ const char *zStopper, /* Stop server when this file is exists (Or NULL) */ const char *zBaseUrl, /* The --baseurl option, or NULL */ const char *zNotFound, /* The --notfound option, or NULL */ const char *zFileGlob, /* The --fileglob option, or NULL */ const char *zIpAddr, /* Bind to this IP address, if not NULL */ int flags /* One or more HTTP_SERVER_ flags */ ){ HANDLE hStoppedEvent; WSADATA wd; SOCKET s = INVALID_SOCKET; SOCKADDR_IN addr; int idCnt = 0; int iPort = mnPort; Blob options; wchar_t zTmpPath[MAX_PATH]; #if USE_SEE const char *zSavedKey = 0; size_t savedKeySize = 0; #endif blob_zero(&options); if( zBaseUrl ){ blob_appendf(&options, " --baseurl %s", zBaseUrl); } if( zNotFound ){ blob_appendf(&options, " --notfound %s", zNotFound); } if( zFileGlob ){ blob_appendf(&options, " --files-urlenc %T", zFileGlob); } if( g.useLocalauth ){ blob_appendf(&options, " --localauth"); } if( g.thTrace ){ blob_appendf(&options, " --th-trace"); } if( flags & HTTP_SERVER_REPOLIST ){ blob_appendf(&options, " --repolist"); } #if USE_SEE zSavedKey = db_get_saved_encryption_key(); savedKeySize = db_get_saved_encryption_key_size(); if( zSavedKey!=0 && savedKeySize>0 ){ blob_appendf(&options, " --usepidkey %lu:%p:%u", GetCurrentProcessId(), zSavedKey, savedKeySize); } #endif if( WSAStartup(MAKEWORD(1,1), &wd) ){ fossil_fatal("unable to initialize winsock"); } while( iPort<=mxPort ){ s = socket(AF_INET, SOCK_STREAM, 0); if( s==INVALID_SOCKET ){ fossil_fatal("unable to create a socket"); |
︙ | ︙ | |||
318 319 320 321 322 323 324 325 326 327 328 329 330 | (flags&HTTP_SERVER_SCGI)!=0?"SCGI":"HTTP", iPort); if( zBrowser ){ zBrowser = mprintf(zBrowser /*works-like:"%d"*/, iPort); fossil_print("Launch webbrowser: %s\n", zBrowser); fossil_system(zBrowser); } fossil_print("Type Ctrl-C to stop the HTTP server\n"); /* Set the service status to running and pass the listener socket to the ** service handling procedures. */ win32_http_service_running(s); for(;;){ SOCKET client; SOCKADDR_IN client_addr; | > > > > > > > > > > > > > > > > > > | < < | | | | | | | | > > | 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 | (flags&HTTP_SERVER_SCGI)!=0?"SCGI":"HTTP", iPort); if( zBrowser ){ zBrowser = mprintf(zBrowser /*works-like:"%d"*/, iPort); fossil_print("Launch webbrowser: %s\n", zBrowser); fossil_system(zBrowser); } fossil_print("Type Ctrl-C to stop the HTTP server\n"); /* Create an event used to signal when this server is exiting. */ hStoppedEvent = CreateEvent(NULL, TRUE, FALSE, NULL); assert( hStoppedEvent!=NULL ); /* If there is a stopper file name, start the dedicated thread now. ** It will attempt to close the listener socket within 1 second of ** the stopper file being created. */ if( zStopper ){ HttpServer *pServer = fossil_malloc(sizeof(HttpServer)); memset(pServer, 0, sizeof(HttpServer)); DuplicateHandle(GetCurrentProcess(), hStoppedEvent, GetCurrentProcess(), &pServer->hStoppedEvent, 0, FALSE, DUPLICATE_SAME_ACCESS); assert( pServer->hStoppedEvent!=NULL ); pServer->zStopper = fossil_strdup(zStopper); pServer->listener = s; file_delete(zStopper); _beginthread(win32_server_stopper, 0, (void*)pServer); } /* Set the service status to running and pass the listener socket to the ** service handling procedures. */ win32_http_service_running(s); for(;;){ SOCKET client; SOCKADDR_IN client_addr; HttpRequest *pRequest; int len = sizeof(client_addr); int wsaError; client = accept(s, (struct sockaddr*)&client_addr, &len); if( client==INVALID_SOCKET ){ /* If the service control handler has closed the listener socket, ** cleanup and return, otherwise report a fatal error. */ wsaError = WSAGetLastError(); if( (wsaError==WSAEINTR) || (wsaError==WSAENOTSOCK) ){ WSACleanup(); return; }else{ closesocket(s); WSACleanup(); fossil_fatal("error from accept()"); } } pRequest = fossil_malloc(sizeof(HttpRequest)); pRequest->id = ++idCnt; pRequest->s = client; pRequest->addr = client_addr; pRequest->flags = flags; pRequest->zOptions = blob_str(&options); if( flags & HTTP_SERVER_SCGI ){ _beginthread(win32_scgi_request, 0, (void*)pRequest); }else{ _beginthread(win32_http_request, 0, (void*)pRequest); } } closesocket(s); WSACleanup(); SetEvent(hStoppedEvent); CloseHandle(hStoppedEvent); } /* ** The HttpService structure is used to pass information to the service main ** function and to the service control handler function. */ typedef struct HttpService HttpService; |
︙ | ︙ | |||
575 576 577 578 579 580 581 | }else{ fossil_fatal("error from StartServiceCtrlDispatcher()"); } } return 0; } | | > > | 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 | }else{ fossil_fatal("error from StartServiceCtrlDispatcher()"); } } return 0; } /* Duplicate #ifdef needed for mkindex */ #ifdef _WIN32 /* ** COMMAND: winsrv* ** ** Usage: %fossil winsrv METHOD ?SERVICE-NAME? ?OPTIONS? ** ** Where METHOD is one of: create delete show start stop. ** ** The winsrv command manages Fossil as a Windows service. This allows |
︙ | ︙ | |||
1024 1025 1026 1027 1028 1029 1030 | }else { fossil_fatal("METHOD should be one of:" " create delete show start stop"); } return; } | > | | 1115 1116 1117 1118 1119 1120 1121 1122 1123 | }else { fossil_fatal("METHOD should be one of:" " create delete show start stop"); } return; } #endif /* _WIN32 -- dupe needed for mkindex */ #endif /* _WIN32 -- This code is for win32 only */ |
Changes to src/xfer.c.
1 2 3 4 5 6 | /* ** Copyright (c) 2007 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 | /* ** Copyright (c) 2007 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) ** ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ |
︙ | ︙ | |||
280 281 282 283 284 285 286 287 288 289 290 291 292 293 | rid = content_put_ex(&content, blob_str(&pXfer->aToken[1]), srcid, szC, isPriv); Th_AppendToList(pzUuidList, pnUuidList, blob_str(&pXfer->aToken[1]), blob_size(&pXfer->aToken[1])); remote_has(rid); blob_reset(&content); } /* ** Try to send a file as a delta against its parent. ** If successful, return the number of bytes in the delta. ** If we cannot generate an appropriate delta, then send ** nothing and return zero. ** | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 | rid = content_put_ex(&content, blob_str(&pXfer->aToken[1]), srcid, szC, isPriv); Th_AppendToList(pzUuidList, pnUuidList, blob_str(&pXfer->aToken[1]), blob_size(&pXfer->aToken[1])); remote_has(rid); blob_reset(&content); } /* ** The aToken[0..nToken-1] blob array is a parse of a "uvfile" line ** message. This routine finishes parsing that message and adds the ** unversioned file to the "unversioned" table. ** ** The file line is in one of the following two forms: ** ** uvfile NAME MTIME HASH SIZE FLAGS ** uvfile NAME MTIME HASH SIZE FLAGS \n CONTENT ** ** If the 0x0001 bit of FLAGS is set, that means the file has been ** deleted, SIZE is zero, the HASH is "-", and the "\n CONTENT" is omitted. ** ** SIZE is the number of bytes of CONTENT. The CONTENT is uncompressed. ** HASH is the SHA1 hash of CONTENT. ** ** If the 0x0004 bit of FLAGS is set, that means the CONTENT is omitted. ** The sender might have omitted the content because it is too big to ** transmit, or because it is unchanged and this record exists purely ** to update the MTIME. 
*/ static void xfer_accept_unversioned_file(Xfer *pXfer, int isWriter){ sqlite3_int64 mtime; /* The MTIME */ Blob *pHash; /* The HASH value */ int sz; /* The SIZE */ int flags; /* The FLAGS */ Blob content; /* The CONTENT */ Blob hash; /* Hash computed from CONTENT to compare with HASH */ Blob x; /* Compressed content */ Stmt q; /* SQL statements for comparison and insert */ int isDelete; /* HASH is "-" indicating this is a delete */ int nullContent; /* True of CONTENT is NULL */ int iStatus; /* Result from unversioned_status() */ pHash = &pXfer->aToken[3]; if( pXfer->nToken==5 || !blob_is_filename(&pXfer->aToken[1]) || !blob_is_int64(&pXfer->aToken[2], &mtime) || (!blob_eq(pHash,"-") && !blob_is_uuid(pHash)) || !blob_is_int(&pXfer->aToken[4], &sz) || !blob_is_int(&pXfer->aToken[5], &flags) ){ blob_appendf(&pXfer->err, "malformed uvfile line"); return; } blob_init(&content, 0, 0); blob_init(&hash, 0, 0); blob_init(&x, 0, 0); if( sz>0 && (flags & 0x0005)==0 ){ blob_extract(pXfer->pIn, sz, &content); nullContent = 0; sha1sum_blob(&content, &hash); if( blob_compare(&hash, pHash)!=0 ){ blob_appendf(&pXfer->err, "in uvfile line, HASH does not match CONTENT"); goto end_accept_unversioned_file; } }else{ nullContent = 1; } /* The isWriter flag must be true in order to land the new file */ if( !isWriter ) goto end_accept_unversioned_file; /* Make sure we have a valid g.rcvid marker */ content_rcvid_init(0); /* Check to see if current content really should be overwritten. Ideally, ** a uvfile card should never have been sent unless the overwrite should ** occur. But do not trust the sender. Double-check. */ iStatus = unversioned_status(blob_str(&pXfer->aToken[1]), mtime, blob_str(pHash)); if( iStatus>=3 ) goto end_accept_unversioned_file; /* Store the content */ isDelete = blob_eq(pHash, "-"); if( isDelete ){ db_prepare(&q, "UPDATE unversioned" " SET rcvid=:rcvid, mtime=:mtime, hash=NULL," " sz=0, encoding=0, content=NULL" " WHERE name=:name" ); db_bind_int(&q, ":rcvid", g.rcvid); }else if( iStatus==4 ){ db_prepare(&q, "UPDATE unversioned SET mtime=:mtime WHERE name=:name"); }else{ db_prepare(&q, "REPLACE INTO unversioned(name,rcvid,mtime,hash,sz,encoding,content)" " VALUES(:name,:rcvid,:mtime,:hash,:sz,:encoding,:content)" ); db_bind_int(&q, ":rcvid", g.rcvid); db_bind_text(&q, ":hash", blob_str(pHash)); db_bind_int(&q, ":sz", blob_size(&content)); if( !nullContent ){ blob_compress(&content, &x); if( blob_size(&x) < 0.8*blob_size(&content) ){ db_bind_blob(&q, ":content", &x); db_bind_int(&q, ":encoding", 1); }else{ db_bind_blob(&q, ":content", &content); db_bind_int(&q, ":encoding", 0); } }else{ db_bind_int(&q, ":encoding", 0); } } db_bind_text(&q, ":name", blob_str(&pXfer->aToken[1])); db_bind_int64(&q, ":mtime", mtime); db_step(&q); db_finalize(&q); db_unset("uv-hash", 0); end_accept_unversioned_file: blob_reset(&x); blob_reset(&content); blob_reset(&hash); } /* ** Try to send a file as a delta against its parent. ** If successful, return the number of bytes in the delta. ** If we cannot generate an appropriate delta, then send ** nothing and return zero. ** |
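To make the card layout concrete, here is one hypothetical uvfile card for each of the three forms described above; the names, the timestamp, the sizes, and the SHA1 values are all invented:

    uvfile doc/logo.png 1478476800 8b5565b67dd55e43c290d7b4eabe2c8e84c762de 4096 0
    <4096 bytes of uncompressed content follow immediately>

    uvfile doc/old-logo.png 1478476800 - 0 1
    (FLAGS bit 0x0001: the file was deleted, so HASH is "-", SIZE is 0, and no content follows)

    uvfile downloads/install.img 1478476800 5df3a4f9e2b1c87d90aa14e6b2c43f7d8e19a0cb 734003200 4
    (FLAGS bit 0x0004: the content is omitted, for example because it is too large to send or because only the mtime changed)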
︙ | ︙ | |||
522 523 524 525 526 527 528 529 530 531 532 533 534 535 | } if( !isPrivate && srcIsPrivate ){ blob_reset(&fullContent); } } db_reset(&q1); } /* ** Send a gimme message for every phantom. ** ** Except: do not request shunned artifacts. And do not request ** private artifacts if we are not doing a private transfer. */ | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 | } if( !isPrivate && srcIsPrivate ){ blob_reset(&fullContent); } } db_reset(&q1); } /* ** Send the unversioned file identified by zName by generating the ** appropriate "uvfile" card. ** ** uvfile NAME MTIME HASH SIZE FLAGS \n CONTENT ** ** If the noContent flag is set, omit the CONTENT and set the 0x0004 ** flag in FLAGS. */ static void send_unversioned_file( Xfer *pXfer, /* Transfer context */ const char *zName, /* Name of unversioned file to be sent */ int noContent /* True to omit the content */ ){ Stmt q1; if( blob_size(pXfer->pOut)>=pXfer->mxSend ) noContent = 1; if( noContent ){ db_prepare(&q1, "SELECT mtime, hash, encoding, sz FROM unversioned WHERE name=%Q", zName ); }else{ db_prepare(&q1, "SELECT mtime, hash, encoding, sz, content FROM unversioned" " WHERE name=%Q", zName ); } if( db_step(&q1)==SQLITE_ROW ){ sqlite3_int64 mtime = db_column_int64(&q1, 0); const char *zHash = db_column_text(&q1, 1); if( blob_size(pXfer->pOut)>=pXfer->mxSend ){ /* If we have already reached the send size limit, send a (short) ** uvigot card rather than a uvfile card. This only happens on the ** server side. The uvigot card will provoke the client to resend ** another uvgimme on the next cycle. */ blob_appendf(pXfer->pOut, "uvigot %s %lld %s %d\n", zName, mtime, zHash, db_column_int(&q1,3)); }else{ blob_appendf(pXfer->pOut, "uvfile %s %lld", zName, mtime); if( zHash==0 ){ blob_append(pXfer->pOut, " - 0 1\n", -1); }else if( noContent ){ blob_appendf(pXfer->pOut, " %s %d 4\n", zHash, db_column_int(&q1,3)); }else{ Blob content; blob_init(&content, 0, 0); db_column_blob(&q1, 4, &content); if( db_column_int(&q1, 2) ){ blob_uncompress(&content, &content); } blob_appendf(pXfer->pOut, " %s %d 0\n", zHash, blob_size(&content)); blob_append(pXfer->pOut, blob_buffer(&content), blob_size(&content)); blob_reset(&content); } } } db_finalize(&q1); } /* ** Send a gimme message for every phantom. ** ** Except: do not request shunned artifacts. And do not request ** private artifacts if we are not doing a private transfer. */ |
︙ | ︙ | |||
591 592 593 594 595 596 597 | */ int check_login(Blob *pLogin, Blob *pNonce, Blob *pSig){ Stmt q; int rc = -1; char *zLogin = blob_terminate(pLogin); defossilize(zLogin); | | > > | 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 | */ int check_login(Blob *pLogin, Blob *pNonce, Blob *pSig){ Stmt q; int rc = -1; char *zLogin = blob_terminate(pLogin); defossilize(zLogin); if( fossil_strcmp(zLogin, "nobody")==0 || fossil_strcmp(zLogin,"anonymous")==0 ){ return 0; /* Anybody is allowed to sync as "nobody" or "anonymous" */ } if( fossil_strcmp(P("REMOTE_USER"), zLogin)==0 && db_get_boolean("remote_user_ok",0) ){ return 0; /* Accept Basic Authorization */ } db_prepare(&q, |
︙ | ︙ | |||
831 832 833 834 835 836 837 838 839 840 841 842 843 844 | configure_render_special_name(zName, &content); blob_appendf(pXfer->pOut, "config %s %d\n%s\n", zName, blob_size(&content), blob_str(&content)); blob_reset(&content); } } /* ** Called when there is an attempt to transfer private content to and ** from a server without authorization. */ static void server_private_xfer_not_authorized(void){ @ error not\sauthorized\sto\ssync\sprivate\scontent } | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 | configure_render_special_name(zName, &content); blob_appendf(pXfer->pOut, "config %s %d\n%s\n", zName, blob_size(&content), blob_str(&content)); blob_reset(&content); } } /* ** pXfer is a "pragma uv-hash HASH" card. ** ** If HASH is different from the unversioned content hash on this server, ** then send a bunch of uvigot cards, one for each entry unversioned file ** on this server. */ static void send_unversioned_catalog(Xfer *pXfer){ unversioned_schema(); if( !blob_eq(&pXfer->aToken[2], unversioned_content_hash(0)) ){ int nUvIgot = 0; Stmt uvq; db_prepare(&uvq, "SELECT name, mtime, hash, sz FROM unversioned" ); while( db_step(&uvq)==SQLITE_ROW ){ const char *zName = db_column_text(&uvq,0); sqlite3_int64 mtime = db_column_int64(&uvq,1); const char *zHash = db_column_text(&uvq,2); int sz = db_column_int(&uvq,3); nUvIgot++; if( zHash==0 ){ sz = 0; zHash = "-"; } blob_appendf(pXfer->pOut, "uvigot %s %lld %s %d\n", zName, mtime, zHash, sz); } db_finalize(&uvq); } } /* ** Called when there is an attempt to transfer private content to and ** from a server without authorization. */ static void server_private_xfer_not_authorized(void){ @ error not\sauthorized\sto\ssync\sprivate\scontent } |
︙ | ︙ | |||
938 939 940 941 942 943 944 945 946 947 948 949 950 951 | char *zNow; int rc; const char *zScript = 0; char *zUuidList = 0; int nUuidList = 0; char **pzUuidList = 0; int *pnUuidList = 0; if( fossil_strcmp(PD("REQUEST_METHOD","POST"),"POST") ){ fossil_redirect_home(); } g.zLogin = "anonymous"; login_set_anon_nobody_capabilities(); login_check_credentials(); | > | 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 | char *zNow; int rc; const char *zScript = 0; char *zUuidList = 0; int nUuidList = 0; char **pzUuidList = 0; int *pnUuidList = 0; int uvCatalogSent = 0; if( fossil_strcmp(PD("REQUEST_METHOD","POST"),"POST") ){ fossil_redirect_home(); } g.zLogin = "anonymous"; login_set_anon_nobody_capabilities(); login_check_credentials(); |
︙ | ︙ | |||
1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 | if( blob_size(&xfer.err) ){ cgi_reset_content(); @ error %T(blob_str(&xfer.err)) nErr++; break; } }else /* gimme UUID ** ** Client is requesting a file. Send it. */ if( blob_eq(&xfer.aToken[0], "gimme") && xfer.nToken==2 && blob_is_uuid(&xfer.aToken[1]) ){ nGimme++; if( isPull ){ int rid = rid_from_uuid(&xfer.aToken[1], 0, 0); if( rid ){ send_file(&xfer, rid, &xfer.aToken[1], deltaFlag); } } }else /* igot UUID ?ISPRIVATE? ** ** Client announces that it has a particular file. If the ISPRIVATE ** argument exists and is non-zero, then the file is a private file. */ if( xfer.nToken>=2 | > > > > > > > > > > > > > > > > > > > > > > > > > | 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 | if( blob_size(&xfer.err) ){ cgi_reset_content(); @ error %T(blob_str(&xfer.err)) nErr++; break; } }else /* uvfile NAME MTIME HASH SIZE FLAGS \n CONTENT ** ** Accept an unversioned file from the client. */ if( blob_eq(&xfer.aToken[0], "uvfile") ){ xfer_accept_unversioned_file(&xfer, g.perm.WrUnver); if( blob_size(&xfer.err) ){ cgi_reset_content(); @ error %T(blob_str(&xfer.err)) nErr++; break; } }else /* gimme UUID ** ** Client is requesting a file. Send it. */ if( blob_eq(&xfer.aToken[0], "gimme") && xfer.nToken==2 && blob_is_uuid(&xfer.aToken[1]) ){ nGimme++; if( isPull ){ int rid = rid_from_uuid(&xfer.aToken[1], 0, 0); if( rid ){ send_file(&xfer, rid, &xfer.aToken[1], deltaFlag); } } }else /* uvgimme NAME ** ** Client is requesting an unversioned file. Send it. */ if( blob_eq(&xfer.aToken[0], "uvgimme") && xfer.nToken==2 && blob_is_filename(&xfer.aToken[1]) ){ send_unversioned_file(&xfer, blob_str(&xfer.aToken[1]), 0); }else /* igot UUID ?ISPRIVATE? ** ** Client announces that it has a particular file. If the ISPRIVATE ** argument exists and is non-zero, then the file is a private file. */ if( xfer.nToken>=2 |
︙ | ︙ | |||
1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 | login_check_credentials(); if( !g.perm.Clone ){ cgi_reset_content(); @ push %s(db_get("server-code", "x")) %s(db_get("project-code", "x")) @ error not\sauthorized\sto\sclone nErr++; break; } if( xfer.nToken==3 && blob_is_int(&xfer.aToken[1], &iVers) && iVers>=2 ){ int seqno, max; if( iVers>=3 ){ | > > > > > | 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 | login_check_credentials(); if( !g.perm.Clone ){ cgi_reset_content(); @ push %s(db_get("server-code", "x")) %s(db_get("project-code", "x")) @ error not\sauthorized\sto\sclone nErr++; break; } if( db_get_boolean("uv-sync",0) && !uvCatalogSent ){ @ pragma uv-pull-only send_unversioned_catalog(&xfer); uvCatalogSent = 1; } if( xfer.nToken==3 && blob_is_int(&xfer.aToken[1], &iVers) && iVers>=2 ){ int seqno, max; if( iVers>=3 ){ |
︙ | ︙ | |||
1223 1224 1225 1226 1227 1228 1229 | } configure_receive(zName, &content, CONFIGSET_ALL); blob_reset(&content); blob_seek(xfer.pIn, 1, BLOB_SEEK_CUR); }else | < | 1466 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 1477 1478 1479 | } configure_receive(zName, &content, CONFIGSET_ALL); blob_reset(&content); blob_seek(xfer.pIn, 1, BLOB_SEEK_CUR); }else /* cookie TEXT ** ** A cookie contains a arbitrary-length argument that is server-defined. ** The argument must be encoded so as not to contain any whitespace. ** The server can optionally send a cookie to the client. The client ** might then return the same cookie back to the server on its next ** communication. The cookie might record information that helps |
︙ | ︙ | |||
1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 | /* pragma NAME VALUE... ** ** The client issue pragmas to try to influence the behavior of the ** server. These are requests only. Unknown pragmas are silently ** ignored. */ if( blob_eq(&xfer.aToken[0], "pragma") && xfer.nToken>=2 ){ /* pragma send-private ** ** If the user has the "x" privilege (which must be set explicitly - ** it is not automatic with "a" or "s") then this pragma causes ** private information to be pulled in addition to public records. */ if( blob_eq(&xfer.aToken[1], "send-private") ){ login_check_credentials(); if( !g.perm.Private ){ server_private_xfer_not_authorized(); }else{ xfer.syncPrivate = 1; } } /* pragma send-catalog ** ** Send igot cards for all known artifacts. */ if( blob_eq(&xfer.aToken[1], "send-catalog") ){ xfer.resync = 0x7fffffff; } }else /* Unknown message */ { cgi_reset_content(); @ error bad\scommand:\s%F(blob_str(&xfer.line)) | > > > > > > > > > > > > > > > > > > > > > > > | 1509 1510 1511 1512 1513 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 1529 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 | /* pragma NAME VALUE... ** ** The client issue pragmas to try to influence the behavior of the ** server. These are requests only. Unknown pragmas are silently ** ignored. */ if( blob_eq(&xfer.aToken[0], "pragma") && xfer.nToken>=2 ){ /* pragma send-private ** ** If the user has the "x" privilege (which must be set explicitly - ** it is not automatic with "a" or "s") then this pragma causes ** private information to be pulled in addition to public records. */ if( blob_eq(&xfer.aToken[1], "send-private") ){ login_check_credentials(); if( !g.perm.Private ){ server_private_xfer_not_authorized(); }else{ xfer.syncPrivate = 1; } } /* pragma send-catalog ** ** Send igot cards for all known artifacts. */ if( blob_eq(&xfer.aToken[1], "send-catalog") ){ xfer.resync = 0x7fffffff; } /* pragma uv-hash HASH ** ** The client wants to make sure that unversioned files are all synced. ** If the HASH does not match, send a complete catalog of ** "uvigot" cards. */ if( blob_eq(&xfer.aToken[1], "uv-hash") && blob_is_uuid(&xfer.aToken[2]) ){ if( !uvCatalogSent ){ if( g.perm.Read && g.perm.WrUnver ){ @ pragma uv-push-ok send_unversioned_catalog(&xfer); }else if( g.perm.Read ){ @ pragma uv-pull-only send_unversioned_catalog(&xfer); } } uvCatalogSent = 1; } }else /* Unknown message */ { cgi_reset_content(); @ error bad\scommand:\s%F(blob_str(&xfer.line)) |
︙ | ︙ | |||
1389 1390 1391 1392 1393 1394 1395 | static const char zBriefFormat[] = "Round-trips: %d Artifacts sent: %d received: %d\r"; #if INTERFACE /* ** Flag options for controlling client_sync() */ | | | | | | | > > > > > | 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 | static const char zBriefFormat[] = "Round-trips: %d Artifacts sent: %d received: %d\r"; #if INTERFACE /* ** Flag options for controlling client_sync() */ #define SYNC_PUSH 0x0001 /* push content client to server */ #define SYNC_PULL 0x0002 /* pull content server to client */ #define SYNC_CLONE 0x0004 /* clone the repository */ #define SYNC_PRIVATE 0x0008 /* Also transfer private content */ #define SYNC_VERBOSE 0x0010 /* Extra diagnostics */ #define SYNC_RESYNC 0x0020 /* --verily */ #define SYNC_UNVERSIONED 0x0040 /* Sync unversioned content */ #define SYNC_UV_REVERT 0x0080 /* Copy server unversioned to client */ #define SYNC_FROMPARENT 0x0100 /* Pull from the parent project */ #define SYNC_UV_TRACE 0x0200 /* Describe UV activities */ #define SYNC_UV_DRYRUN 0x0400 /* Do not actually exchange files */ #endif /* ** Floating-point absolute value */ static double fossil_fabs(double x){ return x>0.0 ? x : -x; |
︙ | ︙ | |||
1421 1422 1423 1424 1425 1426 1427 | unsigned configRcvMask, /* Receive these configuration items */ unsigned configSendMask /* Send these configuration items */ ){ int go = 1; /* Loop until zero */ int nCardSent = 0; /* Number of cards sent */ int nCardRcvd = 0; /* Number of cards received */ int nCycle = 0; /* Number of round trips to the server */ | | > > > > > | > > > > > > > > > > > | 1691 1692 1693 1694 1695 1696 1697 1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742 1743 1744 1745 | unsigned configRcvMask, /* Receive these configuration items */ unsigned configSendMask /* Send these configuration items */ ){ int go = 1; /* Loop until zero */ int nCardSent = 0; /* Number of cards sent */ int nCardRcvd = 0; /* Number of cards received */ int nCycle = 0; /* Number of round trips to the server */ int size; /* Size of a config value or uvfile */ int origConfigRcvMask; /* Original value of configRcvMask */ int nFileRecv; /* Number of files received */ int mxPhantomReq = 200; /* Max number of phantoms to request per comm */ const char *zCookie; /* Server cookie */ i64 nSent, nRcvd; /* Bytes sent and received (after compression) */ int cloneSeqno = 1; /* Sequence number for clones */ Blob send; /* Text we are sending to the server */ Blob recv; /* Reply we got back from the server */ Xfer xfer; /* Transfer data */ int pctDone; /* Percentage done with a message */ int lastPctDone = -1; /* Last displayed pctDone */ double rArrivalTime; /* Time at which a message arrived */ const char *zSCode = db_get("server-code", "x"); const char *zPCode = db_get("project-code", 0); int nErr = 0; /* Number of errors */ int nRoundtrip= 0; /* Number of HTTP requests */ int nArtifactSent = 0; /* Total artifacts sent */ int nArtifactRcvd = 0; /* Total artifacts received */ const char *zOpType = 0;/* Push, Pull, Sync, Clone */ double rSkew = 0.0; /* Maximum time skew */ int uvHashSent = 0; /* The "pragma uv-hash" message has been sent */ int uvDoPush = 0; /* Generate uvfile messages to send to server */ int nUvGimmeSent = 0; /* Number of uvgimme cards sent on this cycle */ int nUvFileRcvd = 0; /* Number of uvfile cards received on this cycle */ sqlite3_int64 mtime; /* Modification time on a UV file */ if( db_get_boolean("dont-push", 0) ) syncFlags &= ~SYNC_PUSH; if( (syncFlags & (SYNC_PUSH|SYNC_PULL|SYNC_CLONE|SYNC_UNVERSIONED))==0 && configRcvMask==0 && configSendMask==0 ) return 0; if( syncFlags & SYNC_FROMPARENT ){ configRcvMask = 0; configSendMask = 0; syncFlags &= ~(SYNC_PUSH); zPCode = db_get("parent-project-code", 0); if( zPCode==0 || db_get("parent-project-name",0)==0 ){ fossil_fatal("there is no parent project: set the 'parent-project-code'" " and 'parent-project-name' config parameters set in order" " to pull from a parent project"); } } transport_stats(0, 0, 1); socket_global_init(); memset(&xfer, 0, sizeof(xfer)); xfer.pIn = &recv; xfer.pOut = &send; xfer.mxSend = db_get_int("max-upload", 250000); |
︙ | ︙ | |||
1471 1472 1473 1474 1475 1476 1477 1478 1479 1480 1481 1482 1483 1484 | origConfigRcvMask = 0; /* Send the send-private pragma if we are trying to sync private data */ if( syncFlags & SYNC_PRIVATE ){ blob_append(&send, "pragma send-private\n", -1); } /* ** Always begin with a clone, pull, or push message */ if( syncFlags & SYNC_CLONE ){ blob_appendf(&send, "clone 3 %d\n", cloneSeqno); syncFlags &= ~(SYNC_PUSH|SYNC_PULL); | > > > > > > > > > > > > > > > > > > > > | 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 1790 | origConfigRcvMask = 0; /* Send the send-private pragma if we are trying to sync private data */ if( syncFlags & SYNC_PRIVATE ){ blob_append(&send, "pragma send-private\n", -1); } /* When syncing unversioned files, create a TEMP table in which to store ** the names of files that need to be sent from client to server. ** ** The initial assumption is that all unversioned files need to be sent ** to the other side. But "uvigot" cards received back from the remote ** side will normally cause many of these entries to be removed since they ** do not really need to be sent. */ if( (syncFlags & (SYNC_UNVERSIONED|SYNC_CLONE))!=0 ){ unversioned_schema(); db_multi_exec( "CREATE TEMP TABLE uv_tosend(" " name TEXT PRIMARY KEY," /* Name of file to send client->server */ " mtimeOnly BOOLEAN" /* True to only send mtime, not content */ ") WITHOUT ROWID;" "INSERT INTO uv_toSend(name,mtimeOnly)" " SELECT name, 0 FROM unversioned WHERE hash IS NOT NULL;" ); } /* ** Always begin with a clone, pull, or push message */ if( syncFlags & SYNC_CLONE ){ blob_appendf(&send, "clone 3 %d\n", cloneSeqno); syncFlags &= ~(SYNC_PUSH|SYNC_PULL); |
︙ | ︙ | |||
1512 1513 1514 1515 1516 1517 1518 | db_begin_transaction(); db_record_repository_filename(0); db_multi_exec( "CREATE TEMP TABLE onremote(rid INTEGER PRIMARY KEY);" ); manifest_crosslink_begin(); | | | 1818 1819 1820 1821 1822 1823 1824 1825 1826 1827 1828 1829 1830 1831 1832 | db_begin_transaction(); db_record_repository_filename(0); db_multi_exec( "CREATE TEMP TABLE onremote(rid INTEGER PRIMARY KEY);" ); manifest_crosslink_begin(); /* Send back the most recently received cookie. Let the server ** figure out if this is a cookie that it cares about. */ zCookie = db_get("cookie", 0); if( zCookie ){ blob_appendf(&send, "cookie %s\n", zCookie); } |
︙ | ︙ | |||
1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 | ){ int overwrite = (configRcvMask & CONFIGSET_OVERWRITE)!=0; configure_prepare_to_receive(overwrite); } origConfigRcvMask = configRcvMask; configRcvMask = 0; } /* Send configuration parameters being pushed */ if( configSendMask ){ if( zOpType==0 ) zOpType = "Push"; if( configSendMask & CONFIGSET_OLDFORMAT ){ const char *zName; zName = configure_first_name(configSendMask); while( zName ){ send_legacy_config_card(&xfer, zName); zName = configure_next_name(configSendMask); nCardSent++; } }else{ nCardSent += configure_send_group(xfer.pOut, configSendMask, 0); } configSendMask = 0; } /* Append randomness to the end of the message. This makes all ** messages unique so that that the login-card nonce will always ** be unique. */ zRandomness = db_text(0, "SELECT hex(randomblob(20))"); blob_appendf(&send, "# %s\n", zRandomness); | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1862 1863 1864 1865 1866 1867 1868 1869 1870 1871 1872 1873 1874 1875 1876 1877 1878 1879 1880 1881 1882 1883 1884 1885 1886 1887 1888 1889 1890 1891 1892 1893 1894 1895 1896 1897 1898 1899 1900 1901 1902 1903 1904 1905 1906 1907 1908 1909 1910 1911 1912 1913 1914 1915 1916 1917 1918 1919 1920 1921 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 | ){ int overwrite = (configRcvMask & CONFIGSET_OVERWRITE)!=0; configure_prepare_to_receive(overwrite); } origConfigRcvMask = configRcvMask; configRcvMask = 0; } /* Send a request to sync unversioned files. On a clone, delay sending ** this until the second cycle since the login card might fail on ** the first cycle. */ if( (syncFlags & SYNC_UNVERSIONED)!=0 && ((syncFlags & SYNC_CLONE)==0 || nCycle>0) && !uvHashSent ){ blob_appendf(&send, "pragma uv-hash %s\n", unversioned_content_hash(0)); nCardSent++; uvHashSent = 1; } /* Send configuration parameters being pushed */ if( configSendMask ){ if( zOpType==0 ) zOpType = "Push"; if( configSendMask & CONFIGSET_OLDFORMAT ){ const char *zName; zName = configure_first_name(configSendMask); while( zName ){ send_legacy_config_card(&xfer, zName); zName = configure_next_name(configSendMask); nCardSent++; } }else{ nCardSent += configure_send_group(xfer.pOut, configSendMask, 0); } configSendMask = 0; } /* Send unversioned files present here on the client but missing or ** obsolete on the server. ** ** Or, if the SYNC_UV_REVERT flag is set, delete the local unversioned ** files that do not exist on the server. ** ** This happens on the second exchange, since we do not know what files ** need to be sent until after the uvigot cards from the first exchange ** have been processed. 
*/ if( uvDoPush ){ assert( (syncFlags & SYNC_UNVERSIONED)!=0 ); if( syncFlags & SYNC_UV_DRYRUN ){ uvDoPush = 0; }else if( syncFlags & SYNC_UV_REVERT ){ db_multi_exec( "DELETE FROM unversioned" " WHERE name IN (SELECT name FROM uv_tosend);" "DELETE FROM uv_tosend;" ); uvDoPush = 0; }else{ Stmt uvq; int rc = SQLITE_OK; db_prepare(&uvq, "SELECT name, mtimeOnly FROM uv_tosend"); while( (rc = db_step(&uvq))==SQLITE_ROW ){ const char *zName = db_column_text(&uvq, 0); send_unversioned_file(&xfer, zName, db_column_int(&uvq,1)); nCardSent++; nArtifactSent++; db_multi_exec("DELETE FROM uv_tosend WHERE name=%Q", zName); if( syncFlags & SYNC_VERBOSE ){ fossil_print("\rUnversioned-file sent: %s\n", zName); } if( blob_size(xfer.pOut)>xfer.mxSend ) break; } db_finalize(&uvq); if( rc==SQLITE_DONE ) uvDoPush = 0; } } /* Append randomness to the end of the message. This makes all ** messages unique so that that the login-card nonce will always ** be unique. */ zRandomness = db_text(0, "SELECT hex(randomblob(20))"); blob_appendf(&send, "# %s\n", zRandomness); |
︙ | ︙ | |||
1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 | nCardSent++; } if( syncFlags & SYNC_PUSH ){ blob_appendf(&send, "push %s %s\n", zSCode, zPCode); nCardSent++; } go = 0; /* Process the reply that came back from the server */ while( blob_line(&recv, &xfer.line) ){ if( blob_buffer(&xfer.line)[0]=='#' ){ const char *zLine = blob_buffer(&xfer.line); if( memcmp(zLine, "# timestamp ", 12)==0 ){ char zTime[20]; double rDiff; sqlite3_snprintf(sizeof(zTime), zTime, "%.19s", &zLine[12]); rDiff = db_double(9e99, "SELECT julianday('%q') - %.17g", zTime, rArrivalTime); if( rDiff>9e98 || rDiff<-9e98 ) rDiff = 0.0; | > > | > > | 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 | nCardSent++; } if( syncFlags & SYNC_PUSH ){ blob_appendf(&send, "push %s %s\n", zSCode, zPCode); nCardSent++; } go = 0; nUvGimmeSent = 0; nUvFileRcvd = 0; /* Process the reply that came back from the server */ while( blob_line(&recv, &xfer.line) ){ if( blob_buffer(&xfer.line)[0]=='#' ){ const char *zLine = blob_buffer(&xfer.line); if( memcmp(zLine, "# timestamp ", 12)==0 ){ char zTime[20]; double rDiff; sqlite3_snprintf(sizeof(zTime), zTime, "%.19s", &zLine[12]); rDiff = db_double(9e99, "SELECT julianday('%q') - %.17g", zTime, rArrivalTime); if( rDiff>9e98 || rDiff<-9e98 ) rDiff = 0.0; if( rDiff*24.0*3600.0 >= -(blob_size(&recv)/5000.0 + 20) ){ rDiff = 0.0; } if( fossil_fabs(rDiff)>fossil_fabs(rSkew) ) rSkew = rDiff; } nCardRcvd++; continue; } xfer.nToken = blob_tokenize(&xfer.line, xfer.aToken, count(xfer.aToken)); nCardRcvd++; |
︙ | ︙ | |||
1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 | ** ** Receive a compressed file transmitted from the server. */ if( blob_eq(&xfer.aToken[0],"cfile") ){ xfer_accept_compressed_file(&xfer, 0, 0); nArtifactRcvd++; }else /* gimme UUID ** ** Server is requesting a file. If the file is a manifest, assume ** that the server will also want to know all of the content files ** associated with the manifest and send those too. */ | > > > > > > > > > > > > > > | 2045 2046 2047 2048 2049 2050 2051 2052 2053 2054 2055 2056 2057 2058 2059 2060 2061 2062 2063 2064 2065 2066 2067 2068 2069 2070 2071 2072 | ** ** Receive a compressed file transmitted from the server. */ if( blob_eq(&xfer.aToken[0],"cfile") ){ xfer_accept_compressed_file(&xfer, 0, 0); nArtifactRcvd++; }else /* uvfile NAME MTIME HASH SIZE FLAGS \n CONTENT ** ** Accept an unversioned file from the client. */ if( blob_eq(&xfer.aToken[0], "uvfile") ){ xfer_accept_unversioned_file(&xfer, 1); nArtifactRcvd++; nUvFileRcvd++; if( syncFlags & SYNC_VERBOSE ){ fossil_print("\rUnversioned-file received: %s\n", blob_str(&xfer.aToken[1])); } }else /* gimme UUID ** ** Server is requesting a file. If the file is a manifest, assume ** that the server will also want to know all of the content files ** associated with the manifest and send those too. */ |
︙ | ︙ | |||
1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 | }else if( (syncFlags & (SYNC_PULL|SYNC_CLONE))!=0 ){ rid = content_new(blob_str(&xfer.aToken[1]), isPriv); if( rid ) newPhantom = 1; } remote_has(rid); }else /* push SERVERCODE PRODUCTCODE ** ** Should only happen in response to a clone. This message tells ** the client what product to use for the new database. */ if( blob_eq(&xfer.aToken[0],"push") | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 2106 2107 2108 2109 2110 2111 2112 2113 2114 2115 2116 2117 2118 2119 2120 2121 2122 2123 2124 2125 2126 2127 2128 2129 2130 2131 2132 2133 2134 2135 2136 2137 2138 2139 2140 2141 2142 2143 2144 2145 2146 2147 2148 2149 2150 2151 2152 2153 2154 2155 2156 2157 2158 2159 2160 2161 2162 2163 2164 2165 2166 2167 2168 2169 2170 2171 2172 2173 2174 2175 2176 2177 2178 2179 2180 2181 2182 2183 2184 2185 2186 2187 2188 2189 2190 | }else if( (syncFlags & (SYNC_PULL|SYNC_CLONE))!=0 ){ rid = content_new(blob_str(&xfer.aToken[1]), isPriv); if( rid ) newPhantom = 1; } remote_has(rid); }else /* uvigot NAME MTIME HASH SIZE ** ** Server announces that it has a particular unversioned file. The ** server will only send this card if the client had previously sent ** a "pragma uv-hash" card with a hash that does not match. ** ** If the identified file needs to be transferred, then setup for the ** transfer. Generate a "uvgimme" card in the reply if the server ** version is newer than the client. Generate a "uvfile" card if ** the client version is newer than the server. If HASH is "-" ** (indicating that the file has been deleted) and MTIME is newer, ** then do the deletion. */ if( xfer.nToken==5 && blob_eq(&xfer.aToken[0], "uvigot") && blob_is_filename(&xfer.aToken[1]) && blob_is_int64(&xfer.aToken[2], &mtime) && blob_is_int(&xfer.aToken[4], &size) && (blob_eq(&xfer.aToken[3],"-") || blob_is_uuid(&xfer.aToken[3])) ){ const char *zName = blob_str(&xfer.aToken[1]); const char *zHash = blob_str(&xfer.aToken[3]); int iStatus; iStatus = unversioned_status(zName, mtime, zHash); if( (syncFlags & SYNC_UV_REVERT)!=0 ){ if( iStatus==4 ) iStatus = 2; if( iStatus==5 ) iStatus = 1; } if( syncFlags & (SYNC_UV_TRACE|SYNC_UV_DRYRUN) ){ const char *zMsg = 0; switch( iStatus ){ case 0: case 1: zMsg = "UV-PULL"; break; case 2: zMsg = "UV-PULL-MTIME-ONLY"; break; case 4: zMsg = "UV-PUSH-MTIME-ONLY"; break; case 5: zMsg = "UV-PUSH"; break; } if( zMsg ) fossil_print("\r%s: %s\n", zMsg, zName); if( syncFlags & SYNC_UV_DRYRUN ){ iStatus = 99; /* Prevent any changes or reply messages */ } } if( iStatus<=1 ){ if( zHash[0]!='-' ){ blob_appendf(xfer.pOut, "uvgimme %s\n", zName); nCardSent++; nUvGimmeSent++; db_multi_exec("DELETE FROM unversioned WHERE name=%Q", zName); }else if( iStatus==1 ){ db_multi_exec( "UPDATE unversioned" " SET mtime=%lld, hash=NULL, sz=0, encoding=0, content=NULL" " WHERE name=%Q", mtime, zName ); db_unset("uv-hash", 0); } }else if( iStatus==2 ){ db_multi_exec( "UPDATE unversioned SET mtime=%lld WHERE name=%Q", mtime, zName ); db_unset("uv-hash", 0); } if( iStatus<=3 ){ db_multi_exec("DELETE FROM uv_tosend WHERE name=%Q", zName); }else if( iStatus==4 ){ db_multi_exec("UPDATE uv_tosend SET mtimeOnly=1 WHERE name=%Q",zName); }else if( iStatus==5 ){ db_multi_exec("REPLACE INTO uv_tosend(name,mtimeOnly) VALUES(%Q,0)", zName); } }else /* push SERVERCODE PRODUCTCODE ** ** Should only happen in response to a clone. 
This message tells ** the client what product to use for the new database. */ if( blob_eq(&xfer.aToken[0],"push") |
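For readers following the uvigot dispatch above, the branches on the unversioned_status() result (a routine not included in this diff) amount to the following; this summary is inferred solely from the code shown here:

    0 or 1   the server's copy wins: send a uvgimme, or, for a "-" hash with status 1, record the deletion locally
    2        content is identical: adopt the server's mtime
    3        nothing to do (and, like 0-2, the file is dropped from uv_tosend so nothing is pushed)
    4        content is identical but the local mtime should win: push the mtime only
    5        the local copy wins: push the full content

With SYNC_UV_REVERT, statuses 4 and 5 are demoted to 2 and 1 respectively, so the server side always prevails.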
︙ | ︙ | |||
1814 1815 1816 1817 1818 1819 1820 | ** ** If the "login failed" message is seen, clear the sync password prior ** to the next cycle. */ if( blob_eq(&xfer.aToken[0],"message") && xfer.nToken==2 ){ char *zMsg = blob_terminate(&xfer.aToken[1]); defossilize(zMsg); | | > > > > > > > > > > > > | 2263 2264 2265 2266 2267 2268 2269 2270 2271 2272 2273 2274 2275 2276 2277 2278 2279 2280 2281 2282 2283 2284 2285 2286 2287 2288 2289 2290 2291 2292 2293 2294 2295 2296 2297 2298 2299 2300 2301 2302 2303 2304 2305 | ** ** If the "login failed" message is seen, clear the sync password prior ** to the next cycle. */ if( blob_eq(&xfer.aToken[0],"message") && xfer.nToken==2 ){ char *zMsg = blob_terminate(&xfer.aToken[1]); defossilize(zMsg); if( (syncFlags & SYNC_PUSH) && zMsg && sqlite3_strglob("pull only *", zMsg)==0 ){ syncFlags &= ~SYNC_PUSH; zMsg = 0; } if( zMsg && zMsg[0] ){ fossil_force_newline(); fossil_print("Server says: %s\n", zMsg); } }else /* pragma NAME VALUE... ** ** The server can send pragmas to try to convey meta-information to ** the client. These are informational only. Unknown pragmas are ** silently ignored. */ if( blob_eq(&xfer.aToken[0], "pragma") && xfer.nToken>=2 ){ /* If the server is unwill to accept new unversioned content (because ** this client lacks the necessary permissions) then it sends a ** "uv-pull-only" pragma so that the client will know not to waste ** bandwidth trying to upload unversioned content. If the server ** does accept new unversioned content, it sends "uv-push-ok". */ if( blob_eq(&xfer.aToken[1], "uv-pull-only") ){ if( syncFlags & SYNC_UV_REVERT ) uvDoPush = 1; }else if( blob_eq(&xfer.aToken[1], "uv-push-ok") ){ uvDoPush = 1; } }else /* error MESSAGE ** ** Report an error and abandon the sync session. ** ** Except, when cloning we will sometimes get an error on the |
︙ | ︙ | |||
1929 1930 1931 1932 1933 1934 1935 | xfer.nFileRcvd = 0; xfer.nDeltaRcvd = 0; xfer.nDanglingFile = 0; /* If we have one or more files queued to send, then go ** another round */ | | > > > > > | 2390 2391 2392 2393 2394 2395 2396 2397 2398 2399 2400 2401 2402 2403 2404 2405 2406 2407 2408 2409 2410 2411 2412 2413 2414 2415 2416 2417 2418 2419 2420 2421 | xfer.nFileRcvd = 0; xfer.nDeltaRcvd = 0; xfer.nDanglingFile = 0; /* If we have one or more files queued to send, then go ** another round */ if( xfer.nFileSent+xfer.nDeltaSent>0 || uvDoPush ){ go = 1; } /* If this is a clone, the go at least two rounds */ if( (syncFlags & SYNC_CLONE)!=0 && nCycle==1 ) go = 1; /* Stop the cycle if the server sends a "clone_seqno 0" card and ** we have gone at least two rounds. Always go at least two rounds ** on a clone in order to be sure to retrieve the configuration ** information which is only sent on the second round. */ if( cloneSeqno<=0 && nCycle>1 ) go = 0; /* Continue looping as long as new uvfile cards are being received ** and uvgimme cards are being sent. */ if( nUvGimmeSent>0 && (nUvFileRcvd>0 || nCycle<3) ) go = 1; db_multi_exec("DROP TABLE onremote"); if( go ){ manifest_crosslink_end(MC_PERMIT_HOOKS); }else{ manifest_crosslink_end(MC_PERMIT_HOOKS); content_enable_dephantomize(1); } |
︙ | ︙ |
Changes to src/xfersetup.c.
︙ | ︙ | |||
59 60 61 62 63 64 65 | }else{ syncFlags = SYNC_PUSH | SYNC_PULL; zButton = "Synchronize"; zWarning = mprintf("WARNING: Pushing to \"%s\" is enabled.", g.url.canonical); } @ <p>Press the <strong>%h(zButton)</strong> button below to | | | 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 | }else{ syncFlags = SYNC_PUSH | SYNC_PULL; zButton = "Synchronize"; zWarning = mprintf("WARNING: Pushing to \"%s\" is enabled.", g.url.canonical); } @ <p>Press the <strong>%h(zButton)</strong> button below to @ synchronize with the <em>%h(g.url.canonical)</em> repository now.<br /> @ This may be useful when testing the various transfer scripts.</p> @ <p>You can use the <code>http -async</code> command in your scripts, but @ make sure the <code>th1-uri-regexp</code> setting is set first.</p> if( zWarning ){ @ @ <big><b>%h(zWarning)</b></big> free(zWarning); |
︙ | ︙ |
Changes to src/zip.c.
︙ | ︙ | |||
307 308 309 310 311 312 313 | /* ** Given the RID for a manifest, construct a ZIP archive containing ** all files in the corresponding baseline. ** ** If RID is for an object that is not a real manifest, then the ** resulting ZIP archive contains a single file which is the RID | | | > > > > > > | 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 | /* ** Given the RID for a manifest, construct a ZIP archive containing ** all files in the corresponding baseline. ** ** If RID is for an object that is not a real manifest, then the ** resulting ZIP archive contains a single file which is the RID ** object. The pInclude and pExclude parameters are ignored in this case. ** ** If the RID object does not exist in the repository, then ** pZip is zeroed. ** ** zDir is a "synthetic" subdirectory which all zipped files get ** added to as part of the zip file. It may be 0 or an empty string, ** in which case it is ignored. The intention is to create a zip which ** politely expands into a subdir instead of filling your current dir ** with source files. For example, pass a UUID or "ProjectName". ** */ void zip_of_checkin( int rid, /* The RID of the checkin to construct the ZIP archive from */ Blob *pZip, /* Write the ZIP archive content into this blob */ const char *zDir, /* Top-level directory of the ZIP archive */ Glob *pInclude, /* Only include files that match this pattern */ Glob *pExclude /* Exclude files that match this pattern */ ){ Blob mfile, hash, file; Manifest *pManifest; ManifestFile *pFile; Blob filename; int nPrefix; content_get(rid, &mfile); |
︙ | ︙ | |||
342 343 344 345 346 347 348 | if( zDir && zDir[0] ){ blob_appendf(&filename, "%s/", zDir); } nPrefix = blob_size(&filename); pManifest = manifest_get(rid, CFTYPE_MANIFEST, 0); if( pManifest ){ | > | > > > > > > > > > > > > > > > > > > | > > | | | > > | > > | | > > > | | | | > | | > > > > > > > > > > > > > > > | | 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 | if( zDir && zDir[0] ){ blob_appendf(&filename, "%s/", zDir); } nPrefix = blob_size(&filename); pManifest = manifest_get(rid, CFTYPE_MANIFEST, 0); if( pManifest ){ int flg, eflg = 0; char *zName = 0; zip_set_timedate(pManifest->rDate); flg = db_get_manifest_setting(); if( flg ){ /* eflg is the effective flags, taking include/exclude into account */ if( (pInclude==0 || glob_match(pInclude, "manifest")) && !glob_match(pExclude, "manifest") && (flg & MFESTFLG_RAW) ){ eflg |= MFESTFLG_RAW; } if( (pInclude==0 || glob_match(pInclude, "manifest.uuid")) && !glob_match(pExclude, "manifest.uuid") && (flg & MFESTFLG_UUID) ){ eflg |= MFESTFLG_UUID; } if( (pInclude==0 || glob_match(pInclude, "manifest.tags")) && !glob_match(pExclude, "manifest.tags") && (flg & MFESTFLG_TAGS) ){ eflg |= MFESTFLG_TAGS; } if( eflg & (MFESTFLG_RAW|MFESTFLG_UUID) ){ if( eflg & MFESTFLG_RAW ){ blob_append(&filename, "manifest", -1); zName = blob_str(&filename); zip_add_folders(zName); } if( eflg & MFESTFLG_UUID ){ sha1sum_blob(&mfile, &hash); } if( eflg & MFESTFLG_RAW ){ sterilize_manifest(&mfile); zip_add_file(zName, &mfile, 0); } } blob_reset(&mfile); if( eflg & MFESTFLG_UUID ){ blob_append(&hash, "\n", 1); blob_resize(&filename, nPrefix); blob_append(&filename, "manifest.uuid", -1); zName = blob_str(&filename); zip_add_folders(zName); zip_add_file(zName, &hash, 0); blob_reset(&hash); } if( eflg & MFESTFLG_TAGS ){ Blob tagslist; blob_zero(&tagslist); get_checkin_taglist(rid, &tagslist); blob_resize(&filename, nPrefix); blob_append(&filename, "manifest.tags", -1); zName = blob_str(&filename); zip_add_folders(zName); zip_add_file(zName, &tagslist, 0); blob_reset(&tagslist); } } manifest_file_rewind(pManifest); while( (pFile = manifest_file_next(pManifest,0))!=0 ){ int fid; if( pInclude!=0 && !glob_match(pInclude, pFile->zName) ) continue; if( glob_match(pExclude, pFile->zName) ) continue; fid = uuid_to_rid(pFile->zUuid, 0); if( fid ){ content_get(fid, &file); blob_resize(&filename, nPrefix); blob_append(&filename, pFile->zName, -1); zName = blob_str(&filename); zip_add_folders(zName); zip_add_file(zName, &file, manifest_file_mperm(pFile)); |
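A minimal sketch, not part of the diff, of driving the zip_of_checkin() interface shown above: archive check-in "trunk" into myproject.zip, keeping only files under src/ and rooting them in a "myproject/" folder. The check-in name, directory name, glob, and output file name are invented; every routine called already appears elsewhere in this file's diff:

    static void example_zip_src_only(void){
      int rid = name_to_typed_rid("trunk", "ci");
      Glob *pInclude = glob_create("src/*");
      Blob zip;
      if( rid==0 ) fossil_fatal("Check-in not found: trunk");
      blob_zero(&zip);
      zip_of_checkin(rid, &zip, "myproject", pInclude, 0);
      blob_write_to_file(&zip, "myproject.zip");
      blob_reset(&zip);
      glob_free(pInclude);
    }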
︙ | ︙ | |||
383 384 385 386 387 388 389 | blob_reset(&filename); zip_close(pZip); } /* ** COMMAND: zip* ** | | | | > > > > > > > > > > > | > > > > > > > > | > > > > > | > > > | | | | | < > > | < < > | > > > > > > > | > > > > > | < > > > > | > > > > > > > > > > > > > > > > > > > > > > > > < < | | > | | > | 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 | blob_reset(&filename); zip_close(pZip); } /* ** COMMAND: zip* ** ** Usage: %fossil zip VERSION OUTPUTFILE [OPTIONS] ** ** Generate a ZIP archive for a check-in. If the --name option is ** used, its argument becomes the name of the top-level directory in the ** resulting ZIP archive. If --name is omitted, the top-level directory ** name is derived from the project name, the check-in date and time, and ** the artifact ID of the check-in. ** ** The GLOBLIST argument to --exclude and --include can be a comma-separated ** list of glob patterns, where each glob pattern may optionally be enclosed ** in "..." or '...' so that it may contain commas. If a file matches both ** --include and --exclude then it is excluded. ** ** Options: ** -X|--exclude GLOBLIST Comma-separated list of GLOBs of files to exclude ** --include GLOBLIST Comma-separated list of GLOBs of files to include ** --name DIRECTORYNAME The name of the top-level directory in the archive ** -R REPOSITORY Specify a Fossil repository */ void zip_cmd(void){ int rid; Blob zip; const char *zName; Glob *pInclude = 0; Glob *pExclude = 0; const char *zInclude; const char *zExclude; zName = find_option("name", 0, 1); zExclude = find_option("exclude", "X", 1); if( zExclude ) pExclude = glob_create(zExclude); zInclude = find_option("include", 0, 1); if( zInclude ) pInclude = glob_create(zInclude); db_find_and_open_repository(0, 0); /* We should be done with options.. */ verify_all_options(); if( g.argc!=4 ){ usage("VERSION OUTPUTFILE"); } rid = name_to_typed_rid(g.argv[2], "ci"); if( rid==0 ){ fossil_fatal("Check-in not found: %s", g.argv[2]); return; } if( zName==0 ){ zName = db_text("default-name", "SELECT replace(%Q,' ','_') " " || strftime('_%%Y-%%m-%%d_%%H%%M%%S_', event.mtime) " " || substr(blob.uuid, 1, 10)" " FROM event, blob" " WHERE event.objid=%d" " AND blob.rid=%d", db_get("project-name", "unnamed"), rid, rid ); } zip_of_checkin(rid, &zip, zName, pInclude, pExclude); glob_free(pInclude); glob_free(pExclude); blob_write_to_file(&zip, g.argv[3]); blob_reset(&zip); } /* ** WEBPAGE: zip ** URL: /zip ** ** Generate a ZIP archive for the check-in specified by the "uuid" ** query parameter. Return that ZIP archive as the HTTP reply content. ** ** Query parameters: ** ** name=NAME[.zip] The base name of the output file. The default ** value is a configuration parameter in the project ** settings. A prefix of the name, omitting the ** extension, is used as the top-most directory name. 
** ** uuid=TAG The check-in that is turned into a ZIP archive. ** Defaults to "trunk". ** ** in=PATTERN Only include files that match the comma-separate ** list of GLOB patterns in PATTERN, as with ex= ** ** ex=PATTERN Omit any file that match PATTERN. PATTERN is a ** comma-separated list of GLOB patterns, where each ** pattern can optionally be quoted using ".." or '..'. ** Any file matching both ex= and in= is excluded. */ void baseline_zip_page(void){ int rid; char *zName, *zRid, *zKey; int nName, nRid; const char *zInclude; /* The in= query parameter */ const char *zExclude; /* The ex= query parameter */ Blob cacheKey; /* The key to cache */ Glob *pInclude = 0; /* The compiled in= glob pattern */ Glob *pExclude = 0; /* The compiled ex= glob pattern */ Blob zip; /* ZIP archive accumulated here */ login_check_credentials(); if( !g.perm.Zip ){ login_needed(g.anon.Zip); return; } load_control(); zName = mprintf("%s", PD("name","")); nName = strlen(zName); zRid = mprintf("%s", PD("uuid","trunk")); nRid = strlen(zRid); zInclude = P("in"); if( zInclude ) pInclude = glob_create(zInclude); zExclude = P("ex"); if( zExclude ) pExclude = glob_create(zExclude); if( nName>4 && fossil_strcmp(&zName[nName-4], ".zip")==0 ){ /* Special case: Remove the ".zip" suffix. */ nName -= 4; zName[nName] = 0; }else{ /* If the file suffix is not ".zip" then just remove the ** suffix up to and including the last "." */ for(nName=strlen(zName)-1; nName>5; nName--){ if( zName[nName]=='.' ){ zName[nName] = 0; break; } } } rid = name_to_typed_rid(nRid?zRid:zName, "ci"); if( rid==0 ){ @ Not found return; } if( nRid==0 && nName>10 ) zName[10] = 0; /* Compute a unique key for the cache entry based on query parameters */ blob_init(&cacheKey, 0, 0); blob_appendf(&cacheKey, "/zip/%z", rid_to_uuid(rid)); blob_appendf(&cacheKey, "/%q", zName); if( zInclude ) blob_appendf(&cacheKey, ",in=%Q", zInclude); if( zExclude ) blob_appendf(&cacheKey, ",ex=%Q", zExclude); zKey = blob_str(&cacheKey); if( P("debug")!=0 ){ style_header("ZIP Archive Generator Debug Screen"); @ zName = "%h(zName)"<br /> @ rid = %d(rid)<br /> if( zInclude ){ @ zInclude = "%h(zInclude)"<br /> } if( zExclude ){ @ zExclude = "%h(zExclude)"<br /> } @ zKey = "%h(zKey)" style_footer(); return; } if( referred_from_login() ){ style_header("ZIP Archive Download"); @ <form action='%R/zip/%h(zName).zip'> cgi_query_parameters_to_hidden(); @ <p>ZIP Archive named <b>%h(zName).zip</b> holding the content @ of check-in <b>%h(zRid)</b>: @ <input type="submit" value="Download" /> @ </form> style_footer(); return; } blob_zero(&zip); if( cache_read(&zip, zKey)==0 ){ zip_of_checkin(rid, &zip, zName, pInclude, pExclude); cache_write(&zip, zKey); } glob_free(pInclude); glob_free(pExclude); fossil_free(zName); fossil_free(zRid); blob_reset(&cacheKey); cgi_set_content(&zip); cgi_set_content_type("application/zip"); } |
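A short test-style sketch of the include/exclude behavior documented above may help; the check-in name, archive name, file paths, and glob patterns are illustrative assumptions, not taken from this check-in. A file that matches both lists is excluded, and files matching neither simply fail the --include filter. The same filters are exposed over HTTP as the in= and ex= query parameters of the /zip page.

  # Sketch, assuming a checkout whose tip contains doc/readme.txt,
  # doc/logo.gif and src/main.c (hypothetical paths).
  fossil zip tip project.zip --include "doc/*" --exclude "*.gif"
  # project.zip now holds doc/readme.txt only: doc/logo.gif matches both
  # lists and is therefore dropped, and src/main.c never matches --include.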
Changes to test/amend.test.
︙ | ︙ | |||
257 258 259 260 261 262 263 | test amend-close-1.1.a {[string match "*uuid:*$UUIDC*" $RESULT]} test amend-close-1.1.b { [string match "*comment:*Create*new*branch*named*\"cllf\"*" $RESULT] } fossil tag ls --raw $UUIDC test amend-close-1.2 {[string first "closed" $RESULT] != -1} fossil timeline -n 1 | | | 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 | test amend-close-1.1.a {[string match "*uuid:*$UUIDC*" $RESULT]} test amend-close-1.1.b { [string match "*comment:*Create*new*branch*named*\"cllf\"*" $RESULT] } fossil tag ls --raw $UUIDC test amend-close-1.2 {[string first "closed" $RESULT] != -1} fossil timeline -n 1 test amend-close-1.3 {[string match {*Mark*"Closed".*} $RESULT]} write_file datafile "cllf" fossil commit -m "should fail" -expectError test amend-close-2 {[string first "closed leaf" $RESULT] != -1} set UUID3 UUID3 fossil revert fossil update trunk |
︙ | ︙ | |||
302 303 304 305 306 307 308 | incr tc set tags {} set cancels {} set t1exp "" set t2exp "*" set t3exp "*" set t5exp "*" | | | 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 | incr tc set tags {} set cancels {} set t1exp "" set t2exp "*" set t3exp "*" set t5exp "*" foreach tag $tagt { lappend tags -tag $tag lappend cancels -cancel $tag } foreach res $result { append t1exp ", $res" append t2exp "sym-$res*" append t3exp "Add*tag*\"$res\".*" |
︙ | ︙ |
Added test/commit-warning.test.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 | # # Copyright (c) 2016 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # The focus of this file is to test pre-commit warnings. # test_setup "" ############################################################################### run_in_checkout { fossil test-commit-warning } test pre-commit-warnings-1 {[normalize_result] eq \ [subst -nocommands -novariables [string trim { 1\tart/branching.odp\tbinary data 1\tart/concept1.dia\tbinary data 1\tart/concept2.dia\tbinary data 1\tcompat/zlib/ChangeLog\tinvalid UTF-8 1\tcompat/zlib/contrib/README.contrib\tinvalid UTF-8 1\tcompat/zlib/contrib/blast/test.pk\tbinary data 1\tcompat/zlib/contrib/dotzlib/DotZLib.build\tCR/NL line endings 1\tcompat/zlib/contrib/dotzlib/DotZLib.chm\tbinary data 1\tcompat/zlib/contrib/dotzlib/DotZLib.sln\tCR/NL line endings 1\tcompat/zlib/contrib/dotzlib/DotZLib/AssemblyInfo.cs\tCR/NL line endings 1\tcompat/zlib/contrib/dotzlib/DotZLib/DotZLib.csproj\tCR/NL line endings 1\tcompat/zlib/contrib/dotzlib/DotZLib/UnitTests.cs\tCR/NL line endings 1\tcompat/zlib/contrib/dotzlib/LICENSE_1_0.txt\tCR/NL line endings 1\tcompat/zlib/contrib/dotzlib/readme.txt\tCR/NL line endings 1\tcompat/zlib/contrib/gcc_gvmat64/gvmat64.S\tCR/NL line endings 1\tcompat/zlib/contrib/masmx64/bld_ml64.bat\tCR/NL line endings 1\tcompat/zlib/contrib/masmx64/gvmat64.asm\tCR/NL line endings 1\tcompat/zlib/contrib/masmx64/inffas8664.c\tCR/NL line endings 1\tcompat/zlib/contrib/masmx64/inffasx64.asm\tCR/NL line endings 1\tcompat/zlib/contrib/masmx64/readme.txt\tCR/NL line endings 1\tcompat/zlib/contrib/masmx86/bld_ml32.bat\tCR/NL line endings 1\tcompat/zlib/contrib/masmx86/inffas32.asm\tCR/NL line endings 1\tcompat/zlib/contrib/masmx86/match686.asm\tCR/NL line endings 1\tcompat/zlib/contrib/masmx86/readme.txt\tCR/NL line endings 1\tcompat/zlib/contrib/puff/zeros.raw\tbinary data 1\tcompat/zlib/contrib/testzlib/testzlib.c\tCR/NL line endings 1\tcompat/zlib/contrib/testzlib/testzlib.txt\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/readme.txt\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/miniunz.vcxproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/miniunz.vcxproj.filters\tCR/NL line endings 
1\tcompat/zlib/contrib/vstudio/vc10/miniunz.vcxproj.user\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/minizip.vcxproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/minizip.vcxproj.filters\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/minizip.vcxproj.user\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/testzlib.vcxproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/testzlib.vcxproj.filters\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/testzlib.vcxproj.user\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/testzlibdll.vcxproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/testzlibdll.vcxproj.filters\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/testzlibdll.vcxproj.user\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/zlib.rc\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/zlibstat.vcxproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/zlibstat.vcxproj.filters\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/zlibstat.vcxproj.user\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/zlibvc.def\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/zlibvc.sln\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/zlibvc.vcxproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/zlibvc.vcxproj.filters\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc10/zlibvc.vcxproj.user\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc11/miniunz.vcxproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc11/minizip.vcxproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc11/testzlib.vcxproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc11/testzlibdll.vcxproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc11/zlib.rc\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc11/zlibstat.vcxproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc11/zlibvc.def\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc11/zlibvc.sln\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc11/zlibvc.vcxproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc9/miniunz.vcproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc9/minizip.vcproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc9/testzlib.vcproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc9/testzlibdll.vcproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlib.rc\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlibstat.vcproj\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlibvc.def\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlibvc.sln\tCR/NL line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlibvc.vcproj\tCR/NL line endings 1\tcompat/zlib/zlib.3.pdf\tbinary data 1\tsetup/fossil.iss\tCR/NL line endings 1\tskins/blitz/arrow_project.png\tbinary data 1\tskins/blitz/dir.png\tbinary data 1\tskins/blitz/file.png\tbinary data 1\tskins/blitz/fossil_100.png\tbinary data 1\tskins/blitz/fossil_80_reversed_darkcyan.png\tbinary data 1\tskins/blitz/fossil_80_reversed_darkcyan_text.png\tbinary data 1\tskins/blitz/rss_20.png\tbinary data 1\ttest/th1-docs-input.txt\tCR/NL line endings 1\ttest/th1-hooks-input.txt\tCR/NL line endings 1\ttest/utf16be.txt\tUnicode 1\ttest/utf16le.txt\tUnicode 1\twin/buildmsvc.bat\tCR/NL line endings 1\twin/fossil.ico\tbinary data 1\twww/CollRev1.gif\tbinary data 1\twww/CollRev2.gif\tbinary data 1\twww/CollRev3.gif\tbinary data 1\twww/CollRev4.gif\tbinary data 1\twww/apple-touch-icon.png\tbinary data 1\twww/background.jpg\tbinary data 
1\twww/branch01.gif\tbinary data 1\twww/branch02.gif\tbinary data 1\twww/branch03.gif\tbinary data 1\twww/branch04.gif\tbinary data 1\twww/branch05.gif\tbinary data 1\twww/build-icons/linux.gif\tbinary data 1\twww/build-icons/linux64.gif\tbinary data 1\twww/build-icons/mac.gif\tbinary data 1\twww/build-icons/openbsd.gif\tbinary data 1\twww/build-icons/src.gif\tbinary data 1\twww/build-icons/win32.gif\tbinary data 1\twww/concept1.gif\tbinary data 1\twww/concept2.gif\tbinary data 1\twww/copyright-release.pdf\tbinary data 1\twww/delta1.gif\tbinary data 1\twww/delta2.gif\tbinary data 1\twww/delta3.gif\tbinary data 1\twww/delta4.gif\tbinary data 1\twww/delta5.gif\tbinary data 1\twww/delta6.gif\tbinary data 1\twww/encode1.gif\tbinary data 1\twww/encode10.gif\tbinary data 1\twww/encode2.gif\tbinary data 1\twww/encode3.gif\tbinary data 1\twww/encode4.gif\tbinary data 1\twww/encode5.gif\tbinary data 1\twww/encode6.gif\tbinary data 1\twww/encode7.gif\tbinary data 1\twww/encode8.gif\tbinary data 1\twww/encode9.gif\tbinary data 1\twww/fossil.gif\tbinary data 1\twww/fossil2.gif\tbinary data 1\twww/fossil3.gif\tbinary data 1\twww/fossil_logo_small.gif\tbinary data 1\twww/fossil_logo_small2.gif\tbinary data 1\twww/fossil_logo_small3.gif\tbinary data 1\twww/xkcd-git.gif\tbinary data 1}]]} ############################################################################### test_cleanup |
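The expected output above is keyed to the Fossil source tree itself. Below is a minimal sketch of how each warning class could be provoked in a scratch checkout; the file names are invented, and it assumes test-commit-warning inspects newly added files the same way it inspects the committed tree exercised above.

  write_file has-crlf.txt "line one\r\nline two\r\n"   ;# CR/NL line endings
  write_file has-nul.bin  "binary \x00 data"           ;# NUL byte => binary data
  write_file bad-utf8.txt "not utf-8: \xff\xfe\xff"    ;# invalid UTF-8
  fossil add has-crlf.txt has-nul.bin bad-utf8.txt
  fossil test-commit-warning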
Changes to test/delta1.test.
︙ | ︙ | |||
32 33 34 35 36 37 38 | if {[file isdir $f]} continue set base [file root [file tail $f]] set f1 [read_file $f] write_file t1 $f1 for {set i 0} {$i<100} {incr i} { write_file t2 [random_changes $f1 1 1 0 0.1] fossil test-delta t1 t2 | | | | > > > > > > > > > | 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 | if {[file isdir $f]} continue set base [file root [file tail $f]] set f1 [read_file $f] write_file t1 $f1 for {set i 0} {$i<100} {incr i} { write_file t2 [random_changes $f1 1 1 0 0.1] fossil test-delta t1 t2 test delta-$base-$i-1 {[normalize_result]=="ok"} write_file t2 [random_changes $f1 1 1 0 0.2] fossil test-delta t1 t2 test delta-$base-$i-2 {[normalize_result]=="ok"} write_file t2 [random_changes $f1 1 1 0 0.4] fossil test-delta t1 t2 test delta-$base-$i-3 {[normalize_result]=="ok"} } } set empties { "" "" "" a a "" } set i 0 foreach {f1 f2} $empties { incr i write_file t1 $f1 write_file t2 $f2 fossil test-delta t1 t2 test delta-empty-$i {[normalize_result]=="ok"} } ############################################################################### test_cleanup |
Added test/diff.test.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 | # # Copyright (c) 2016 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # Tests for the diff command. # require_no_open_checkout test_setup; set rootDir [file normalize [pwd]] ################################### # Tests of binary file detection. # ################################### file mkdir .fossil-settings write_file [file join .fossil-settings binary-glob] "*" write_file file0.dat ""; # no content. write_file file1.dat "test file 1 (one line no term)." write_file file2.dat "test file 2 (NUL character).\0" write_file file3.dat "test file 3 (long line).[string repeat x 16384]" write_file file4.dat "test file 4 (long line).[string repeat y 16384]\ntwo" write_file file5.dat "[string repeat z 16384]\ntest file 5 (long line)." fossil add $rootDir fossil commit -m "c1" ############################################################################### fossil ls test diff-ls-1 {[normalize_result] eq \ "file0.dat\nfile1.dat\nfile2.dat\nfile3.dat\nfile4.dat\nfile5.dat"} ############################################################################### write_file file0.dat "\0" fossil diff file0.dat test diff-file0-1 {[normalize_result] eq {Index: file0.dat ================================================================== --- file0.dat +++ file0.dat cannot compute difference between binary files}} ############################################################################### write_file file1.dat [string repeat z 16384] fossil diff file1.dat test diff-file1-1 {[normalize_result] eq {Index: file1.dat ================================================================== --- file1.dat +++ file1.dat cannot compute difference between binary files}} ############################################################################### write_file file2.dat "test file 2 (no NUL character)." fossil diff file2.dat test diff-file2-1 {[normalize_result] eq {Index: file2.dat ================================================================== --- file2.dat +++ file2.dat cannot compute difference between binary files}} ############################################################################### write_file file3.dat "test file 3 (not a long line)." 
fossil diff file3.dat test diff-file3-1 {[normalize_result] eq {Index: file3.dat ================================================================== --- file3.dat +++ file3.dat cannot compute difference between binary files}} ############################################################################### write_file file4.dat "test file 4 (not a long line).\ntwo" fossil diff file4.dat test diff-file4-1 {[normalize_result] eq {Index: file4.dat ================================================================== --- file4.dat +++ file4.dat cannot compute difference between binary files}} ############################################################################### write_file file5.dat "[string repeat 0 16]\ntest file 5 (not a long line)." fossil diff file5.dat test diff-file5-1 {[normalize_result] eq {Index: file5.dat ================================================================== --- file5.dat +++ file5.dat cannot compute difference between binary files}} ############################################################################### test_cleanup |
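The "*" pattern above deliberately forces every file down the binary path. The same versionable setting works with a narrower pattern; the sketch below is a hypothetical variation on the test, not part of it.

  # Only *.dat files are treated as binary; notes.txt still diffs as text.
  write_file [file join .fossil-settings binary-glob] "*.dat"
  write_file notes.txt "plain text"
  fossil add notes.txt
  fossil commit -m "c2"
  write_file notes.txt "plain text, edited"
  fossil diff notes.txt   ;# ordinary textual hunks
  fossil diff file1.dat   ;# still "cannot compute difference between binary files"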
Added test/fake-editor.tcl.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 | # # Copyright (c) 2016 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # This is a fake text editor for use by tests. To customize its behavior, # set the FAKE_EDITOR_SCRIPT environment variable prior to evaluating this # script file. If FAKE_EDITOR_SCRIPT environment variable is not set, the # default behavior will be used. The default behavior is to append the # process identifier and the current time, in seconds, to the file data. # if {![info exists argv] || [llength $argv] != 1} { error "Usage: \"[info nameofexecutable]\" \"[info script]\" <fileName>" } ############################################################################### proc makeBinaryChannel { channel } { fconfigure $channel -encoding binary -translation binary } proc readFile { fileName } { set channel [open $fileName RDONLY] makeBinaryChannel $channel set result [read $channel] close $channel return $result } proc writeFile { fileName data } { set channel [open $fileName {WRONLY CREAT TRUNC}] makeBinaryChannel $channel puts -nonewline $channel $data close $channel return "" } ############################################################################### set fileName [lindex $argv 0] if {[file exists $fileName]} then { set data [readFile $fileName] } else { set data "" } ############################################################################### if {[info exists env(FAKE_EDITOR_SCRIPT)]} { # # NOTE: If an error is caught while evaluating this script, catch # it and return, which will also skip writing the (possibly # modified) content back to the original file. # set script $env(FAKE_EDITOR_SCRIPT) set code [catch $script error] if {$code != 0} then { if {[info exists env(FAKE_EDITOR_VERBOSE)]} { if {[info exists errorInfo]} { puts stdout "ERROR ($code): $errorInfo" } else { puts stdout "ERROR ($code): $error" } } return } } else { # # NOTE: The default behavior is to append the process identifier # and the current time, in seconds, to the file data. # append data " " [pid] " " [clock seconds] } ############################################################################### writeFile $fileName $data |
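How the test suite points fossil at this script is not part of this check-in, so the wiring below is only an assumption: the EDITOR environment variable runs the fake editor under the current tclsh, and FAKE_EDITOR_SCRIPT replaces the text handed to it. Whatever the script leaves in the data variable is written back to the file, per the catch/writeFile logic above.

  # Hypothetical wiring from a test script ($path = directory of the tests).
  set env(FAKE_EDITOR_SCRIPT) {set data "edited commit message\n"}
  set env(EDITOR) "[info nameofexecutable] [file join $path fake-editor.tcl]"
  # A commit made without -m would now receive the replacement text.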
Changes to test/graph-test-1.wiki.
︙ | ︙ | |||
64 65 66 67 68 69 70 71 72 73 74 75 76 77 | Merge on the same branch does not result in a leaf. </a> * <a href="../../../timeline?c=20015206bc" target="testwindow"> This timeline has a hidden commit.</a> Click Unhide to reveal. * <a href="../../../timeline?y=ci&n=15&b=2a4e4cf03e" target="testwindow">Isolated check-ins.</a> External: * <a href="http://www.sqlite.org/src/timeline?c=2010-09-29&nd" target="testwindow">Timewarp due to a mis-configured system clock.</a> * <a href="http://core.tcl.tk/tk/finfo?name=tests/id.test" target="testwindow">Show all three separate deletions of "id.test". | > > > | 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 | Merge on the same branch does not result in a leaf. </a> * <a href="../../../timeline?c=20015206bc" target="testwindow"> This timeline has a hidden commit.</a> Click Unhide to reveal. * <a href="../../../timeline?y=ci&n=15&b=2a4e4cf03e" target="testwindow">Isolated check-ins.</a> * <a href="../../../timeline?b=0fa60142&n=50" target="testwindow">Single branch raiser from bottom of page up to checkins 057e4b and d3cc6d</a> External: * <a href="http://www.sqlite.org/src/timeline?c=2010-09-29&nd" target="testwindow">Timewarp due to a mis-configured system clock.</a> * <a href="http://core.tcl.tk/tk/finfo?name=tests/id.test" target="testwindow">Show all three separate deletions of "id.test". |
︙ | ︙ |
Changes to test/json.test.
︙ | ︙ | |||
21 22 23 24 25 26 27 | # Make sure we have a build with the json command at all and that it # is not stubbed out. This assumes the current (as of 2016-01-27) # practice of eliminating all trace of the fossil json command when # not configured. If that changes, these conditions might not prevent # the rest of this file from running. fossil test-th-eval "hasfeature json" | | | 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 | # Make sure we have a build with the json command at all and that it # is not stubbed out. This assumes the current (as of 2016-01-27) # practice of eliminating all trace of the fossil json command when # not configured. If that changes, these conditions might not prevent # the rest of this file from running. fossil test-th-eval "hasfeature json" if {[normalize_result] ne "1"} then { puts "Fossil was not compiled with JSON support." test_cleanup_then_return } # We need a JSON parser to effectively test the JSON produced by # fossil. It looks like the one from tcllib is exactly what we need. # On ActiveTcl, add it with teacup. On other platforms, YMMV. |
︙ | ︙ |
Changes to test/merge2.test.
︙ | ︙ | |||
33 34 35 36 37 38 39 40 41 42 43 44 45 46 | expr {srand($i*2+1)} write_file t3 [set f3 [random_changes $f1 2 4 2 0.1]] expr {srand($i*2+1)} write_file t23 [random_changes $f2 2 4 2 0.1] expr {srand($i*2)} write_file t32 [random_changes $f3 2 4 0 0.1] fossil 3-way-merge t1 t2 t3 a23 test merge-$base-$i-23 {[same_file a23 t23]} fossil 3-way-merge t1 t3 t2 a32 test merge-$base-$i-32 {[same_file a32 t32]} } } ############################################################################### | > | 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 | expr {srand($i*2+1)} write_file t3 [set f3 [random_changes $f1 2 4 2 0.1]] expr {srand($i*2+1)} write_file t23 [random_changes $f2 2 4 2 0.1] expr {srand($i*2)} write_file t32 [random_changes $f3 2 4 0 0.1] fossil 3-way-merge t1 t2 t3 a23 if {[regexp {<<<<< BEGIN MERGE CONFLICT:} [read_file a23]]} continue test merge-$base-$i-23 {[same_file a23 t23]} fossil 3-way-merge t1 t3 t2 a32 test merge-$base-$i-32 {[same_file a32 t32]} } } ############################################################################### |
︙ | ︙ |
Changes to test/mv-rm.test.
︙ | ︙ | |||
13 14 15 16 17 18 19 20 21 22 23 24 25 26 | # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # MV / RM Commands # require_no_open_checkout ######################################## # Setup: Add Files and Commit # ######################################## | > > | 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 | # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # MV / RM Commands # set path [file dirname [info script]] require_no_open_checkout ######################################## # Setup: Add Files and Commit # ######################################## |
︙ | ︙ |
Changes to test/revert.test.
︙ | ︙ | |||
23 24 25 26 27 28 29 | # Test 'fossil revert' against expected results from 'fossil changes' and # 'fossil addremove -n', as well as by verifying the existence of files # on the file system. 'fossil undo' is called after each test # proc revert-test {testid revertArgs expectedRevertOutput args} { global RESULT set passed 1 | | | | | | | | 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 | # Test 'fossil revert' against expected results from 'fossil changes' and # 'fossil addremove -n', as well as by verifying the existence of files # on the file system. 'fossil undo' is called after each test # proc revert-test {testid revertArgs expectedRevertOutput args} { global RESULT set passed 1 set args [dict merge { -changes {} -addremove {} -exists {} -notexists {} } $args] set result [fossil revert {*}$revertArgs] test_status_list revert-$testid $result $expectedRevertOutput set statusListTests [list -changes changes -addremove {addremove -n}] foreach {key fossilArgs} $statusListTests { set expected [dict get $args $key] set result [fossil {*}$fossilArgs] test_status_list revert-$testid$key $result $expected } set fileExistsTests [list -exists 1 does -notexists 0 should] foreach {key expected verb} $fileExistsTests { foreach path [dict get $args $key] { if {[file exists $path] != $expected} { set passed 0 protOut " Failure: File $verb not exist: $path" } } test revert-$testid$key $passed } fossil undo } require_no_open_checkout test_setup # Prepare first commit |
︙ | ︙ |
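The reworked helper takes its expectations as dict-style options with empty defaults, so callers only spell out what they care about. A hypothetical call (file names and expectations invented) looks like this; note that the helper finishes with fossil undo, so the reverted edit comes back afterwards.

  write_file f1 "f1.1"   ;# local edit to be reverted
  write_file f2 "f2.1"   ;# unrelated edit that must survive
  revert-test sketch-1 f1 {
    REVERT f1
  } -changes {
    EDITED f2
  } -exists {f1 f2}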
Added test/set-manifest.test.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 | # # Copyright (c) 2016 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # Test manifest setting # # We need SHA1 to effectively test the manifest files produced by # fossil. It looks like the one from tcllib is exactly what we need. # On ActiveTcl, add it with teacup. On other platforms, YMMV. # teacup install sha1 package require sha1 proc file_contains {fname match} { set fp [open $fname r] set contents [read $fp] close $fp set lines [split $contents "\n"] foreach line $lines { if {[regexp $match $line]} { return 1 } } return 0 } # We need a respository, so let it have one. test_setup #### Verify classic behavior of the manifest setting # Setting is off by default, and there are no extra files. fossil settings manifest test "set-manifest-1" {[regexp {^manifest *$} $RESULT]} set filelist [glob -nocomplain manifest*] test "set-manifest-1-n" {[llength $filelist] == 0} # Classic behavior: TRUE value creates manifest and manifest.uuid set truths [list true on 1] foreach v $truths { fossil settings manifest $v test "set-manifest-2-$v" {$RESULT eq ""} fossil settings manifest test "set-manifest-2-$v-a" {[regexp "^manifest\\s+\\(local\\)\\s+$v\\s*$" $RESULT]} set filelist [glob manifest*] test "set-manifest-2-$v-n" {[llength $filelist] == 2} foreach f $filelist { test "set-manifest-2-$v-f-$f" {[file isfile $f]} } } # ... and manifest.uuid is the checkout's hash fossil info regexp {(?m)^checkout:\s+([0-9a-f]{40})\s.*$} $RESULT ckoutline ckid set uuid [string trim [read_file "manifest.uuid"]] test "set-manifest-2-uuid" {$ckid eq $uuid} # ... which is also the SHA1 of the file "manifest" before it was # sterilized by appending an extra line when writing the file. The # extra text begins with # and is a full line, so we'll just strip # it with a brute-force substitution. This probably has the right # effect even if the checkin was PGP-signed, but we don't have that # setting turned on for this manifest in any case. 
regsub {(?m)^#.*\n} [read_file "manifest"] "" manifest set muuid [::sha1::sha1 $manifest] test "set-manifest-2-manifest" {$muuid eq $uuid} # Classic behavior: FALSE value removes manifest and manifest.uuid set falses [list false off 0] foreach v $falses { fossil settings manifest $v test "set-manifest-3-$v" {$RESULT eq ""} fossil settings manifest test "set-manifest-3-$v-a" {[regexp "^manifest\\s+\\(local\\)\\s+$v\\s*$" $RESULT]} set filelist [glob -nocomplain manifest*] test "set-manifest-3-$v-n" {[llength $filelist] == 0} } # Classic behavior: unset removes manifest and manifest.uuid fossil unset manifest test "set-manifest-4" {$RESULT eq ""} fossil settings manifest test "set-manifest-4-a" {[regexp {^manifest *$} $RESULT]} set filelist [glob -nocomplain manifest*] test "set-manifest-4-n" {[llength $filelist] == 0} ##### Tags Manifest feature extends the manifest setting # Manifest Tags: use letters r, u, and t to select each of manifest, # manifest.uuid, and manifest.tags files. set truths [list r u t ru ut rt rut] foreach v $truths { fossil settings manifest $v test "set-manifest-5-$v" {$RESULT eq ""} fossil settings manifest test "set-manifest-5-$v-a" {[regexp "^manifest\\s+\\(local\\)\\s+$v\\s*$" $RESULT]} set filelist [glob manifest*] test "set-manifest-5-$v-n" {[llength $filelist] == [string length $v]} foreach f $filelist { test "set-manifest-5-$v-f-$f" {[file isfile $f]} } } # Quick check for tags applied in trunk test_file_contents "set-manifest-6" "manifest.tags" "branch trunk\ntag trunk\n" ##### Test manifest.tags file content updates after commits # Explicitly set manifest.tags mode fossil set manifest t test "set-manifest-7-1" {[file isfile manifest.tags]} # Add a tag and make sure it appears in manifest.tags fossil tag add manifest-7-tag-1 tip test "set-manifest-7-2" {[file_contains "manifest.tags" "^tag manifest-7-tag-1$"]} # Add a file and make sure tag has disappeared from manifest.tags write_file file1 "file1 contents" fossil add file1 fossil commit -m "Added file1." test "set-manifest-7-3" {![file_contains "manifest.tags" "^tag manifest-7-tag-1$"]} # Add new tag and check that it is in manifest.tags fossil tag add manifest-7-tag-2 tip test "set-manifest-7-4" {[file_contains "manifest.tags" "^tag manifest-7-tag-2$"]} ##### Tags manifest branch= updates # Add file, create new branch on commit and check that # manifest.tags has been updated appropriately write_file file3 "file3 contents" fossil add file3 fossil commit -m "Added file3." --branch manifest-8-branch test "set-manifest-8" {[file_contains "manifest.tags" "^branch manifest-8-branch$"]} test_cleanup |
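Since the extended setting is letter-driven, a single combination is easy to spot-check by hand. The test names below are invented, but the expected file set follows directly from the r/u/t mapping exercised above.

  fossil settings manifest ut
  test sketch-manifest-ut-1 {[file isfile manifest.uuid]}
  test sketch-manifest-ut-2 {[file isfile manifest.tags]}
  test sketch-manifest-ut-3 {![file isfile manifest]}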
Added test/settings-repo.test.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 | # # Copyright (c) 2016 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # The "settings" and "unset" commands that may modify the repository. # set path [file dirname [info script]] require_no_open_checkout test_setup ############################################################################### # # Complete syntax as tested: # # fossil settings ?PROPERTY? ?VALUE? ?OPTIONS? # fossil unset PROPERTY ?OPTIONS? # # Where the only supported options are "--global" and "--exact". # ############################################################################### set all_settings [get_all_settings] foreach name $all_settings { # # HACK: Make 100% sure that there are no non-default setting values # present anywhere. # fossil unset $name --exact --global fossil unset $name --exact # # NOTE: Query for the hard-coded default value of this setting and # save it. 
# fossil test-th-eval "setting $name" set defaults($name) [normalize_result] } ############################################################################### fossil settings bad-setting some_value test settings-set-bad-local { [normalize_result] eq "no such setting: bad-setting" } fossil settings bad-setting some_value --global test settings-set-bad-global { [normalize_result] eq "no such setting: bad-setting" } ############################################################################### fossil unset bad-setting test settings-unset-bad-local { [normalize_result] eq "no such setting: bad-setting" } fossil unset bad-setting --global test settings-unset-bad-global { [normalize_result] eq "no such setting: bad-setting" } ############################################################################### fossil settings ssl some_value test settings-set-ambiguous-local { [normalize_result] eq "ambiguous setting \"ssl\" - might be: ssl-ca-location ssl-identity" } fossil settings ssl some_value --global test settings-set-ambiguous-global { [normalize_result] eq "ambiguous setting \"ssl\" - might be: ssl-ca-location ssl-identity" } ############################################################################### fossil unset ssl test settings-unset-ambiguous-local { [normalize_result] eq "ambiguous setting \"ssl\" - might be: ssl-ca-location ssl-identity" } fossil unset ssl --global test settings-unset-ambiguous-global { [normalize_result] eq "ambiguous setting \"ssl\" - might be: ssl-ca-location ssl-identity" } ############################################################################### set pattern(1) {^%name%$} set pattern(3) {^%name%[ ]+\(global\)[ ]+%value%+$} set pattern(4) {^%name%[ ]+\(local\)[ ]+%value%+$} foreach name $all_settings { if {$name ne "manifest"} { set value #global_for_$name fossil settings $name $value --exact --global set data [normalize_result] test settings-set-$name-global { $data eq "" } fossil settings $name --exact --global set data [normalize_result] test settings-set-check1-$name-global { [regexp -- [string map \ [list %name% $name %value% $value] $pattern(3)] $data] } fossil test-th-eval --open-config "setting $name" set data [normalize_result] test settings-set-check2-$name-global { $data eq $value } fossil unset $name --exact --global set data [normalize_result] test settings-unset-$name-global { $data eq "" } fossil settings $name --exact --global set data [normalize_result] test settings-unset-check1-$name-global { [regexp -- [string map \ [list %name% $name %value% $value] $pattern(1)] $data] } fossil test-th-eval --open-config "setting $name" set data [normalize_result] test settings-unset-check2-$name-global { $data eq $defaults($name) } } set value #local_for_$name fossil settings $name $value --exact set data [normalize_result] test settings-set-$name-local { $data eq "" } fossil settings $name --exact set data [normalize_result] test settings-set-check1-$name-local { [regexp -- [string map \ [list %name% $name %value% $value] $pattern(4)] $data] } fossil test-th-eval --open-config "setting $name" set data [normalize_result] test settings-set-check2-$name-local { $data eq $value } fossil unset $name --exact set data [normalize_result] test settings-unset-$name-local { $data eq "" } fossil settings $name --exact set data [normalize_result] test settings-unset-check1-$name-local { [regexp -- [string map \ [list %name% $name %value% $value] $pattern(1)] $data] } fossil test-th-eval --open-config "setting $name" set data [normalize_result] test 
settings-unset-check2-$name-local { $data eq $defaults($name) } } ############################################################################### set pattern(5) \ {^%name%[ ]+\n \(overridden by contents of file \.fossil-settings/%name%\)$} set versionable_settings [get_versionable_settings] file mkdir .fossil-settings foreach name $versionable_settings { fossil settings $name --exact set data [normalize_result] test settings-before-versionable-$name { [regexp -- [string map [list %name% $name] $pattern(1)] $data] } set value #versionable_for_$name set fileName [file join .fossil-settings $name] write_file $fileName $value fossil settings $name --exact set data [normalize_result] test settings-set-check1-versionable-$name { [regexp -- [string map [list %name% $name] $pattern(5)] $data] } fossil test-th-eval --open-config "setting $name" set data [normalize_result] test settings-set-check2-versionable-$name { $data eq $value } file delete $fileName fossil settings $name --exact set data [normalize_result] test settings-after-versionable-$name { [regexp -- [string map [list %name% $name] $pattern(1)] $data] } } ############################################################################### test_cleanup |
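The versionable-settings loop above drives every setting through the same motions; for one concrete setting the override looks like this (ignore-glob is an arbitrary choice and the value is invented).

  write_file [file join .fossil-settings ignore-glob] "*.obj"
  fossil settings ignore-glob --exact
  # The report notes the override:
  #   (overridden by contents of file .fossil-settings/ignore-glob)
  fossil test-th-eval --open-config {setting ignore-glob}
  # normalize_result now yields "*.obj".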
Added test/settings.test.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 | # # Copyright (c) 2016 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # The "settings" and "unset" commands. # set path [file dirname [info script]]; test_setup ############################################################################### # # Complete syntax as tested: # # fossil settings ?PROPERTY? ?VALUE? ?OPTIONS? # fossil unset PROPERTY ?OPTIONS? # # Where the only supported options are "--global" and "--exact". # ############################################################################### # # NOTE: The [extract_setting_names] procedure extracts the list of setting # names from the line-ending normalized output of the "fossil settings" # command. It assumes that a setting name must begin with a lowercase # letter. It also assumes that any output lines that start with a # lowercase letter contain a setting name starting at that same point. 
# proc extract_setting_names { data } { set names [list] foreach {dummy name} [regexp \ -all -line -inline -- {^([a-z][a-z0-9\-]*) } $data] { lappend names $name } return $names } ############################################################################### set all_settings [get_all_settings] fossil settings set local_settings [extract_setting_names [normalize_result_no_trim]] fossil settings --global set global_settings [extract_setting_names [normalize_result_no_trim]] foreach name $all_settings { test settings-have-local-$name { [lsearch -exact $local_settings $name] != -1 } test settings-have-global-$name { [lsearch -exact $global_settings $name] != -1 } } foreach name $local_settings { test settings-valid-local-$name { [lsearch -exact $all_settings $name] != -1 } } foreach name $global_settings { test settings-valid-global-$name { [lsearch -exact $all_settings $name] != -1 } } ############################################################################### set pattern(1) {^%name%$} set pattern(2) {^%name%[ ]+\((?:local|global)\)[ ]+[^ ]+$} foreach name $all_settings { fossil settings $name --exact set data [normalize_result] test settings-query-local-$name { [regexp -- [string map [list %name% $name] $pattern(1)] $data] || [regexp -- [string map [list %name% $name] $pattern(2)] $data] } fossil settings $name --exact --global set data [normalize_result] if {$name eq "manifest"} { test settings-query-global-$name { $data eq "cannot set 'manifest' globally" } } else { test settings-query-global-$name { [regexp -- [string map [list %name% $name] $pattern(1)] $data] || [regexp -- [string map [list %name% $name] $pattern(2)] $data] } } } ############################################################################### fossil settings bad-setting test settings-query-bad-local { [normalize_result] eq "no such setting: bad-setting" } fossil settings bad-setting --global test settings-query-bad-global { [normalize_result] eq "no such setting: bad-setting" } ############################################################################### test_cleanup |
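A quick illustration of what extract_setting_names actually keeps, using made-up output lines rather than a live repository:

  set sample    "auto-captcha     (local)   off\n"
  append sample "Typo line that is skipped\n"
  append sample "autosync         (global)  on\n"
  extract_setting_names $sample
  # => {auto-captcha autosync}; only lines that begin with a lowercase
  #    setting name followed by a space are matched.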
Changes to test/stash.test.
︙ | ︙ | |||
13 14 15 16 17 18 19 | # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # # Tests for 'fossil stash' | | | 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 | # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # # Tests for 'fossil stash' # # proc knownBug {t tests} { return [expr {$t in $tests ? "knownBug" : ""}] } # Test 'fossil stash' against expected results from 'fossil changes' and |
︙ | ︙ | |||
41 42 43 44 45 46 47 | # -notexists One or more listed files do exist # # Also, if the exit status of fossil stash does not match # expectations, the rest of the areas are not tested. proc test_result_state {testid cmdArgs expectedOutput args} { global RESULT set passed 1 | | | | | | | | 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 | # -notexists One or more listed files do exist # # Also, if the exit status of fossil stash does not match # expectations, the rest of the areas are not tested. proc test_result_state {testid cmdArgs expectedOutput args} { global RESULT set passed 1 set args [dict merge { -changes {} -addremove {} -exists {} -notexists {} -knownbugs {} } $args] set knownbugs [dict get $args "-knownbugs"] set result $::RESULT set code $::CODE if {[lindex $cmdArgs end] eq "-expectError"} { test $testid-CODE {$code} [knownBug "-code" $knownbugs] if {!$code} { return } } else { test $testid-CODE {!$code} [knownBug "-code" $knownbugs] if {$code} { return } } test_status_list $testid $result $expectedOutput [knownBug "-result" $knownbugs] set statusListTests [list -changes changes -addremove {addremove -n}] foreach {key fossilArgs} $statusListTests { set expected [dict get $args $key] set result [fossil {*}$fossilArgs] test_status_list $testid$key $result $expected [knownBug $key $knownbugs] } set fileExistsTests [list -exists 1 does -notexists 0 should] foreach {key expected verb} $fileExistsTests { foreach path [dict get $args $key] { if {[file exists $path] != $expected} { set passed 0 protOut " Failure: File $verb not exist: $path" } } test $testid$key $passed [knownBug $key $knownbugs] } #fossil undo } proc stash-test {testid stashArgs expectedStashOutput args} { fossil stash {*}$stashArgs return [test_result_state stash-$testid "stash $stashArgs" $expectedStashOutput {*}$args] } require_no_open_checkout test_setup # Prepare first commit # |
︙ | ︙ | |||
183 184 185 186 187 188 189 | UPDATE f2 UPDATE f3n ADDED f0 } -changes { ADDED f0 MISSING f1 EDITED f2 | | < < | 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 | UPDATE f2 UPDATE f3n ADDED f0 } -changes { ADDED f0 MISSING f1 EDITED f2 RENAMED f3n } -addremove { DELETED f1 } -exists {f0 f2 f3n} -notexists {f1 f3} # Confirm there is no longer a stash saved fossil stash list test stash-2-list {[first_data_line] eq "empty stash"} |
︙ | ︙ | |||
241 242 243 244 245 246 247 | stash-test 2-1 {save -m "f1b"} { REVERT f1 DELETE f1n } -exists {f1} -notexists {f1n} -knownbugs {-code -result} # TODO: add tests that verify the saved stash is sensible. Possibly # by applying it and checking results. But until the MISSING file # error is fixed, there is nothing stashed to test. | | | | | 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 | stash-test 2-1 {save -m "f1b"} { REVERT f1 DELETE f1n } -exists {f1} -notexists {f1n} -knownbugs {-code -result} # TODO: add tests that verify the saved stash is sensible. Possibly # by applying it and checking results. But until the MISSING file # error is fixed, there is nothing stashed to test. # Test stashing a newly added (but never committed) file. As with # fossil revert, fossil stash save unmanages the new file, but # leaves the copy present on disk. This is undocumented, but # probably sensible. test_setup write_file f1 "f1" write_file f2 "f2" fossil add f1 f2 fossil commit -m "baseline" |
︙ | ︙ | |||
280 281 282 283 284 285 286 | } -addremove { } -exists {f1 f2 f3} -notexists {} fossil status # Test stashing a rename of one file with at least one file # unchanged. This should stash (and revert) just the rename | | | 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 | } -addremove { } -exists {f1 f2 f3} -notexists {} fossil status # Test stashing a rename of one file with at least one file # unchanged. This should stash (and revert) just the rename # operation. Instead it also stores and touches the unchanged file. test_setup write_file f1 "f1" write_file f2 "f2" fossil add f1 f2 fossil commit -m "baseline" fossil mv --hard f2 f2n |
︙ | ︙ | |||
309 310 311 312 313 314 315 | test stash-3-2-show-2 {[regexp {\sf2n} $RESULT]} stash-test 3-2-pop {pop} { UPDATE f1 UPDATE f2n } -changes { RENAMED f2n } -addremove { | < < | | 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 | test stash-3-2-show-2 {[regexp {\sf2n} $RESULT]} stash-test 3-2-pop {pop} { UPDATE f1 UPDATE f2n } -changes { RENAMED f2n } -addremove { } -exists {f1 f2n} -notexists {f2} ######## # fossil stash snapshot ?-m|--comment COMMENT? ?FILES...? test_setup |
︙ | ︙ |
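The -knownbugs option threads through test_result_state as a per-check constraint, and the small knownBug helper at the top of the file resolves it. For example (values invented):

  knownBug -code   {-code -result}   ;# => "knownBug" (check marked as a known bug)
  knownBug -exists {-code -result}   ;# => ""         (check still counts normally)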
Added test/symlinks.test.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 | # # Copyright (c) 2016 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # Symbolic link tests. # set path [file dirname [info script]] if {$tcl_platform(platform) eq "windows"} { puts "Symlinks are not supported on Windows." test_cleanup_then_return } fossil test-th-eval --open-config "setting allow-symlinks" if {![string is true -strict [normalize_result]]} { puts "Symlinks are not enabled." test_cleanup_then_return } require_no_open_checkout ############################################################################### test_setup; set rootDir [file normalize [pwd]] fossil test-th-eval --open-config {repository} set repository [normalize_result] if {[string length $repository] == 0} { puts "Detection of the open repository file failed." test_cleanup_then_return } ####################################### # Use symbolic link to a directory... 
# ####################################### file mkdir [file join $rootDir subdirA] exec ln -s [file join $rootDir subdirA] symdirA ############################################################################### write_file [file join $rootDir subdirA f1.txt] "f1" write_file [file join $rootDir subdirA f2.txt] "f2" test symlinks-dir-1 {[file exists [file join $rootDir subdirA f1.txt]] eq 1} test symlinks-dir-2 {[file exists [file join $rootDir symdirA f1.txt]] eq 1} test symlinks-dir-3 {[file exists [file join $rootDir subdirA f2.txt]] eq 1} test symlinks-dir-4 {[file exists [file join $rootDir symdirA f2.txt]] eq 1} fossil add [file join $rootDir symdirA f1.txt] fossil commit -m "c1" ############################################################################### fossil ls test symlinks-dir-5 {[normalize_result] eq "symdirA/f1.txt"} ############################################################################### fossil extras test symlinks-dir-6 {[normalize_result] eq \ "subdirA/f1.txt\nsubdirA/f2.txt\nsymdirA/f2.txt"} ############################################################################### fossil close file delete [file join $rootDir subdirA f1.txt] test symlinks-dir-7 {[file exists [file join $rootDir subdirA f1.txt]] eq 0} test symlinks-dir-8 {[file exists [file join $rootDir symdirA f1.txt]] eq 0} test symlinks-dir-9 {[file exists [file join $rootDir subdirA f2.txt]] eq 1} test symlinks-dir-10 {[file exists [file join $rootDir symdirA f2.txt]] eq 1} ############################################################################### fossil open $repository set code [catch {file readlink [file join $rootDir symdirA]} result] test symlinks-dir-11 {$code == 0} test symlinks-dir-12 {$result eq [file join $rootDir subdirA]} test symlinks-dir-13 {[file exists [file join $rootDir subdirA f1.txt]] eq 1} test symlinks-dir-14 {[file exists [file join $rootDir symdirA f1.txt]] eq 1} test symlinks-dir-15 {[file exists [file join $rootDir subdirA f2.txt]] eq 1} test symlinks-dir-16 {[file exists [file join $rootDir symdirA f2.txt]] eq 1} ############################################################################### # # TODO: Add tests for symbolic links as files here, including tests with the # "allow-symlinks" setting on and off. # ############################################################################### test_cleanup |
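One possible shape for the file-symlink tests the TODO above asks for, with allow-symlinks already guaranteed by the guard at the top of the file; the file names are hypothetical and this is only a sketch, not part of the check-in.

  write_file target.txt "t1"
  exec ln -s target.txt symfile
  fossil add target.txt symfile
  fossil commit -m "c2"
  fossil ls
  # Both target.txt and symfile should now appear among the managed files.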
Changes to test/tester.tcl.
︙ | ︙ | |||
134 135 136 137 138 139 140 | # Sets the CODE and RESULT global variables for use in # test expressions. # proc fossil_maybe_answer {answer args} { global fossilexe set cmd $fossilexe set expectError 0 | | > | > > > > > > > > > | > > > > | > | 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 | # Sets the CODE and RESULT global variables for use in # test expressions. # proc fossil_maybe_answer {answer args} { global fossilexe set cmd $fossilexe set expectError 0 set index [lsearch -exact $args -expectError] if {$index != -1} { set expectError 1 set args [lreplace $args $index $index] } set keepNewline 0 set index [lsearch -exact $args -keepNewline] if {$index != -1} { set keepNewline 1 set args [lreplace $args $index $index] } foreach a $args { lappend cmd $a } protOut $cmd flush stdout if {[string length $answer] > 0} { protOut $answer set prompt_file [file join $::tempPath fossil_prompt_answer] write_file $prompt_file $answer\n if {$keepNewline} { set rc [catch {eval exec -keepnewline $cmd <$prompt_file} result] } else { set rc [catch {eval exec $cmd <$prompt_file} result] } file delete $prompt_file } else { if {$keepNewline} { set rc [catch {eval exec -keepnewline $cmd} result] } else { set rc [catch {eval exec $cmd} result] } } global RESULT CODE set CODE $rc if {($rc && !$expectError) || (!$rc && $expectError)} { protOut "ERROR: $result" 1 } elseif {$::VERBOSE} { protOut "RESULT: $result" |
︙ | ︙ | |||
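A brief note on the two flags handled above: they may appear anywhere in the argument list and are stripped before the command line is assembled. The sketch below is illustrative only and is not part of this check-in; the subcommand being run and the test names are invented, while fossil_maybe_answer, test, CODE and RESULT are the tester.tcl pieces shown in this hunk.

    # A deliberately unknown subcommand is expected to fail, so -expectError
    # keeps the harness from logging the failure as an ERROR.
    fossil_maybe_answer "" no-such-subcommand -expectError
    test example-expecterror {$CODE != 0}

    # -keepNewline forwards -keepnewline to [exec], so the trailing newline of
    # the output survives into $RESULT for exact comparisons.
    fossil_maybe_answer "" version -keepNewline
    test example-keepnewline {[string index $RESULT end] eq "\n"}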
184 185 186 187 188 189 190 191 192 193 194 195 196 197 | fconfigure $out -translation binary puts -nonewline $out $txt close $out } proc write_file_indented {filename txt} { write_file $filename [string trim [string map [list "\n " \n] $txt]]\n } # Return true if two files are the same # proc same_file {a b} { set x [read_file $a] regsub -all { +\n} $x \n x set y [read_file $b] | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 | fconfigure $out -translation binary puts -nonewline $out $txt close $out } proc write_file_indented {filename txt} { write_file $filename [string trim [string map [list "\n " \n] $txt]]\n } # Returns the list of all supported versionable settings. # proc get_versionable_settings {} { # # TODO: If the list of supported versionable settings in "db.c" is modified, # this list (and procedure) most likely needs to be modified as well. # set result [list \ allow-symlinks \ binary-glob \ clean-glob \ crnl-glob \ dotfiles \ empty-dirs \ encoding-glob \ ignore-glob \ keep-glob \ manifest \ th1-setup \ th1-uri-regexp] fossil test-th-eval "hasfeature tcl" if {[normalize_result] eq "1"} { lappend result tcl-setup } return [lsort -dictionary $result] } # Returns the list of all supported settings. # proc get_all_settings {} { # # TODO: If the list of supported settings in "db.c" is modified, this list # (and procedure) most likely needs to be modified as well. # set result [list \ access-log \ admin-log \ allow-symlinks \ auto-captcha \ auto-hyperlink \ auto-shun \ autosync \ autosync-tries \ binary-glob \ case-sensitive \ clean-glob \ clearsign \ crnl-glob \ default-perms \ diff-binary \ diff-command \ dont-push \ dotfiles \ editor \ empty-dirs \ encoding-glob \ exec-rel-paths \ gdiff-command \ gmerge-command \ hash-digits \ http-port \ https-login \ ignore-glob \ keep-glob \ localauth \ main-branch \ manifest \ max-loadavg \ max-upload \ mtime-changes \ pgp-command \ proxy \ relative-paths \ repo-cksum \ self-register \ ssh-command \ ssl-ca-location \ ssl-identity \ th1-setup \ th1-uri-regexp \ uv-sync \ web-browser] fossil test-th-eval "hasfeature legacyMvRm" if {[normalize_result] eq "1"} { lappend result mv-rm-files } fossil test-th-eval "hasfeature tcl" if {[normalize_result] eq "1"} { lappend result tcl tcl-setup } fossil test-th-eval "hasfeature th1Docs" if {[normalize_result] eq "1"} { lappend result th1-docs } fossil test-th-eval "hasfeature th1Hooks" if {[normalize_result] eq "1"} { lappend result th1-hooks } return [lsort -dictionary $result] } # Return true if two files are the same # proc same_file {a b} { set x [read_file $a] regsub -all { +\n} $x \n x set y [read_file $b] |
︙ | ︙ | |||
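Because the two lists above have to be kept in sync with db.c by hand, a settings test can at least check that they stay self-consistent. The loops below are only a sketch (the test names and the assumption of an open checkout are mine); get_versionable_settings, get_all_settings, fossil and test come from tester.tcl.

    # Every versionable setting should also appear in the full settings list.
    set all [get_all_settings]
    foreach name [get_versionable_settings] {
      test settings-subset-$name {[lsearch -exact $all $name] != -1}
    }

    # Assuming an open checkout, "fossil settings <name>" should accept each
    # advertised setting without an error.
    foreach name $all {
      fossil settings $name
      test settings-known-$name {$CODE == 0}
    }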
325 326 327 328 329 330 331 | } } # This procedure only returns non-zero if the Tcl integration feature was # enabled at compile-time and is now enabled at runtime. proc is_tcl_usable_by_fossil {} { fossil test-th-eval "hasfeature tcl" | | | | | | | | 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 | } } # This procedure only returns non-zero if the Tcl integration feature was # enabled at compile-time and is now enabled at runtime. proc is_tcl_usable_by_fossil {} { fossil test-th-eval "hasfeature tcl" if {[normalize_result] ne "1"} {return 0} fossil test-th-eval "setting tcl" if {[normalize_result] eq "1"} {return 1} fossil test-th-eval --open-config "setting tcl" if {[normalize_result] eq "1"} {return 1} return [info exists ::env(TH1_ENABLE_TCL)] } # This procedure only returns non-zero if the TH1 hooks feature was enabled # at compile-time and is now enabled at runtime. proc are_th1_hooks_usable_by_fossil {} { fossil test-th-eval "hasfeature th1Hooks" if {[normalize_result] ne "1"} {return 0} fossil test-th-eval "setting th1-hooks" if {[normalize_result] eq "1"} {return 1} fossil test-th-eval --open-config "setting th1-hooks" if {[normalize_result] eq "1"} {return 1} return [info exists ::env(TH1_ENABLE_HOOKS)] } # This (rarely used) procedure is designed to run a test within the Fossil # source checkout (e.g. one that does NOT modify any state), while saving # and restoring the current directory (e.g. one used when running a test # file outside of the Fossil source checkout). Please do NOT use this |
︙ | ︙ | |||
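For context, the typical consumer of these two probes is a guard near the top of a Tcl-dependent test file, as th1.test does later in this check-in; the sketch below only restates that pattern and the message text is mine.

    set th1Tcl [is_tcl_usable_by_fossil]
    set th1Hooks [are_th1_hooks_usable_by_fossil]
    if {!$th1Tcl} {
      puts "Fossil was not compiled with Tcl support; skipping Tcl-only tests."
    }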
445 446 447 448 449 450 451 | # # NOTE: Check if we can use any of the environment variables. # foreach name $names { set value [getEnvironmentVariable $name] | | | | | 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 | # # NOTE: Check if we can use any of the environment variables. # foreach name $names { set value [getEnvironmentVariable $name] if {[string length $value] > 0} { set value [file normalize $value] if {[file exists $value] && [file isdirectory $value]} { return $value } } } # # NOTE: On non-Windows systems, fallback to /tmp if it is usable. # if {$::tcl_platform(platform) ne "windows"} { set value /tmp if {[file exists $value] && [file isdirectory $value]} { return $value } } # # NOTE: There must be a usable temporary directory to continue testing. # |
︙ | ︙ | |||
617 618 619 620 621 622 623 624 625 626 627 628 629 630 | set line [string range $line 0 $i]$stuff[string range $line $ip1 end] } } append out \n$line } return [string range $out 1 end] } # Executes the "fossil http" command. The entire content of the HTTP request # is read from the data file name, with [subst] being performed on it prior to # submission. Temporary input and output files are created and deleted. The # result will be the contents of the temoprary output file. proc test_fossil_http { repository dataFileName url } { set suffix [appendArgs [pid] - [getSeqNo] - [clock seconds] .txt] | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 | set line [string range $line 0 $i]$stuff[string range $line $ip1 end] } } append out \n$line } return [string range $out 1 end] } # This procedure executes the "fossil server" command. The return value # is a list comprised of the new process identifier and the port on which # the server started. The varName argument refers to a variable # where the "stop argument" is to be stored. This value must eventually be # passed to the [test_stop_server] procedure. proc test_start_server { repository {varName ""} } { global fossilexe tempPath set command [list exec $fossilexe server --localhost] if {[string length $varName] > 0} { upvar 1 $varName stopArg } if {$::tcl_platform(platform) eq "windows"} { set stopArg [file join [getTemporaryPath] [appendArgs \ [string trim [clock seconds] -] _ [getSeqNo] .stopper]] lappend command --stopper $stopArg } set outFileName [file join $tempPath [appendArgs \ fossil_server_ [string trim [clock seconds] -] _ \ [getSeqNo]]].out lappend command $repository >&$outFileName & set pid [eval $command] if {$::tcl_platform(platform) ne "windows"} { set stopArg $pid } after 1000; # output might not be there yet set output [read_file $outFileName] if {![regexp {Listening.*TCP port (\d+)} $output dummy port]} { puts stdout "Could not detect Fossil server port, using default..." set port 8080; # return the default port just in case } return [list $pid $port $outFileName] } # This procedure stops a Fossil server instance that was previously started # by the [test_start_server] procedure. The value of the "stop argument" # will vary by platform as will the exact method used to stop the server. # The fileName argument is the name of a temporary output file to delete. proc test_stop_server { stopArg pid fileName } { if {$::tcl_platform(platform) eq "windows"} { # # NOTE: On Windows, the "stop argument" must be the name of a file # that does NOT already exist. # if {[string length $stopArg] > 0 && \ ![file exists $stopArg] && \ [catch {write_file $stopArg [clock seconds]}] == 0} { while {1} { if {[catch { # # NOTE: Using the TaskList utility requires Windows XP or # later. # exec tasklist.exe /FI "PID eq $pid" } result] != 0 || ![regexp -- " $pid " $result]} { break } after 1000; # wait a bit... 
} file delete $stopArg if {[string length $fileName] > 0} { file delete $fileName } return true } } else { # # NOTE: On Unix, the "stop argument" must be an integer identifier # that refers to an existing process. # if {[regexp {^(?:-)?\d+$} $stopArg] && \ [catch {exec kill -TERM $stopArg}] == 0} { while {1} { if {[catch { # # TODO: Is this portable to all the supported variants of # Unix? It should be, it's POSIX. # exec ps -p $pid } result] != 0 || ![regexp -- "(?:^$pid| $pid) " $result]} { break } after 1000; # wait a bit... } if {[string length $fileName] > 0} { file delete $fileName } return true } } return false } # Executes the "fossil http" command. The entire content of the HTTP request # is read from the data file name, with [subst] being performed on it prior to # submission. Temporary input and output files are created and deleted. The # result will be the contents of the temoprary output file. proc test_fossil_http { repository dataFileName url } { set suffix [appendArgs [pid] - [getSeqNo] - [clock seconds] .txt] |
︙ | ︙ | |||
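The unversioned.test file added later in this check-in is the first consumer of these two procs; the sketch below condenses that usage. Only the placeholder comment and the final test name are invented here, the rest mirrors the new test file.

    foreach {pid port outTmpFile} [test_start_server $repository stopArg] {}
    set remote http://localhost:$port/

    # ... exercise the server here, e.g. clone from $remote and sync ...

    set stopped [test_stop_server $stopArg $pid $outTmpFile]
    test server-stop-example {$stopped}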
653 654 655 656 657 658 659 660 661 662 663 664 665 666 | return [incr seqNo] } # fixup the whitespace in the result to make it easier to compare. proc normalize_result {} { return [string map [list \r\n \n] [string trim $::RESULT]] } # returns the first line of the normalized result. proc first_data_line {} { return [lindex [split [normalize_result] \n] 0] } # returns the second line of the normalized result. | > > > > > | 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 | return [incr seqNo] } # fixup the whitespace in the result to make it easier to compare. proc normalize_result {} { return [string map [list \r\n \n] [string trim $::RESULT]] } # fixup the line-endings in the result to make it easier to compare. proc normalize_result_no_trim {} { return [string map [list \r\n \n] $::RESULT] } # returns the first line of the normalized result. proc first_data_line {} { return [lindex [split [normalize_result] \n] 0] } # returns the second line of the normalized result. |
︙ | ︙ |
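A small illustration of why the untrimmed variant exists (the RESULT value is made up): normalize_result would strip the trailing newline that some assertions need to see.

    set ::RESULT "line one\r\nline two\r\n"
    # normalize_result          returns "line one\nline two"
    # normalize_result_no_trim  returns "line one\nline two\n"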
Changes to test/th1-docs.test.
︙ | ︙ | |||
16 17 18 19 20 21 22 | ############################################################################ # # TH1 Docs # fossil test-th-eval "hasfeature th1Docs" | | | | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 | ############################################################################ # # TH1 Docs # fossil test-th-eval "hasfeature th1Docs" if {[normalize_result] ne "1"} { puts "Fossil was not compiled with TH1 docs support." test_cleanup_then_return } fossil test-th-eval "hasfeature tcl" if {[normalize_result] ne "1"} { puts "Fossil was not compiled with Tcl support." test_cleanup_then_return } ############################################################################### test_setup "" |
︙ | ︙ |
Changes to test/th1-hooks.test.
︙ | ︙ | |||
16 17 18 19 20 21 22 | ############################################################################ # # TH1 Hooks # fossil test-th-eval "hasfeature th1Hooks" | | | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 | ############################################################################ # # TH1 Hooks # fossil test-th-eval "hasfeature th1Hooks" if {[normalize_result] ne "1"} { puts "Fossil was not compiled with TH1 hooks support." test_cleanup_then_return } ############################################################################### test_setup |
︙ | ︙ | |||
73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 | } elseif {$::cmd_name eq "test4"} { emit_hook_log return -code 2 "TH_RETURN return code" } elseif {$::cmd_name eq "timeline"} { set length [llength $::cmd_args] set length [expr {$length - 1}] if {[lindex $::cmd_args $length] eq "custom"} { emit_hook_log return "custom timeline" } elseif {[lindex $::cmd_args $length] eq "now"} { emit_hook_log return "now timeline" } else { emit_hook_log error "unsupported timeline" } | > > > > > | 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 | } elseif {$::cmd_name eq "test4"} { emit_hook_log return -code 2 "TH_RETURN return code" } elseif {$::cmd_name eq "timeline"} { set length [llength $::cmd_args] set length [expr {$length - 1}] if {[lindex $::cmd_args $length] eq "custom"} { append_hook_log "CUSTOM TIMELINE" emit_hook_log return "custom timeline" } elseif {[lindex $::cmd_args $length] eq "custom2"} { emit_hook_log puts "+++ some stuff here +++" continue "custom2 timeline" } elseif {[lindex $::cmd_args $length] eq "now"} { emit_hook_log return "now timeline" } else { emit_hook_log error "unsupported timeline" } |
︙ | ︙ | |||
120 121 122 123 124 125 126 | ############################################################################### saveTh1SetupFile; writeTh1SetupFile $testTh1Setup ############################################################################### | | | > | > > > > > | 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 | ############################################################################### saveTh1SetupFile; writeTh1SetupFile $testTh1Setup ############################################################################### fossil timeline custom -expectError; # NOTE: Bad "WHEN" argument. test th1-cmd-hooks-1a {[normalize_result] eq \ {<h1><b>command_hook timeline CUSTOM TIMELINE</b></h1> unknown check-in or invalid date: custom}} ############################################################################### fossil timeline custom2; # NOTE: Bad "WHEN" argument. test th1-cmd-hooks-1b {[normalize_result] eq \ {<h1><b>command_hook timeline</b></h1> +++ some stuff here +++ <h1><b>command_hook timeline command_notify timeline</b></h1>}} ############################################################################### fossil timeline test th1-cmd-hooks-2a {[first_data_line] eq \ {<h1><b>command_hook timeline</b></h1>}} |
︙ | ︙ |
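Read together, the two new expectations suggest the contract being exercised: a command_hook that ends with return lets the real command run (hence the "unknown check-in or invalid date" message for custom), while continue suppresses the real command but still fires command_notify (hence no error for custom2). The TH1 sketch below restates that in isolation; it is inferred from the tests above, not code from the check-in.

    proc command_hook {} {
      if {$::cmd_name eq "timeline"} {
        # The real command is skipped; command_notify still runs.
        continue "skipped"
      }
      # The real command executes normally afterwards.
      return "proceed"
    }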
Changes to test/th1-repo.test.
︙ | ︙ | |||
17 18 19 20 21 22 23 24 25 26 27 28 29 30 | # Chris Drexler <ckolumbus@ac-drexler.de> # ############################################################################ # # TH1 tests that may modify the repository # require_no_open_checkout ######################################## # Setup: Add Files and Commit # ######################################## test_setup; set rootDir [file normalize [pwd]] | > > | 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 | # Chris Drexler <ckolumbus@ac-drexler.de> # ############################################################################ # # TH1 tests that may modify the repository # set path [file dirname [info script]] require_no_open_checkout ######################################## # Setup: Add Files and Commit # ######################################## test_setup; set rootDir [file normalize [pwd]] |
︙ | ︙ | |||
49 50 51 52 53 54 55 | write_file [file join $rootDir subdirC f11t.xt] "f11" set files_md [list subdirB/f5.md subdirB/f6.md subdirB/f8.md subdirC/f10.md] fossil add $rootDir fossil commit -m "c1" | < < | 51 52 53 54 55 56 57 58 59 60 61 62 63 64 | write_file [file join $rootDir subdirC f11t.xt] "f11" set files_md [list subdirB/f5.md subdirB/f6.md subdirB/f8.md subdirC/f10.md] fossil add $rootDir fossil commit -m "c1" ############################################################################### fossil test-th-eval --open-config "dir trunk subdir*/*.md" test th1-dir-1 {[llength $RESULT] eq [llength $files_md]} set n 1 foreach i $RESULT j $files_md { |
︙ | ︙ |
Changes to test/th1-tcl.test.
︙ | ︙ | |||
14 15 16 17 18 19 20 | # http://www.hwaci.com/drh/ # ############################################################################ # # TH1/Tcl integration # | | | | | 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 | # http://www.hwaci.com/drh/ # ############################################################################ # # TH1/Tcl integration # set path [file dirname [info script]] ############################################################################### fossil test-th-eval "hasfeature tcl" if {[normalize_result] ne "1"} { puts "Fossil was not compiled with Tcl support." test_cleanup_then_return } ############################################################################### test_setup ############################################################################### set env(TH1_ENABLE_TCL) 1; # Tcl integration must be enabled for this test. ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl1.txt]] test th1-tcl-1 {[regexp -- {^tclReady\(before\) = 0 tclReady\(after\) = 1 \d+ \d+ \d+ via Tcl invoke |
︙ | ︙ | |||
63 64 65 66 67 68 69 | one_word three words now$} [normalize_result]]} ############################################################################### if {[catch {package require sqlite3}] == 0} { fossil test-th-render --open-config \ | | | | | | | | | | | | | | | | | | | | 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 | one_word three words now$} [normalize_result]]} ############################################################################### if {[catch {package require sqlite3}] == 0} { fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl2.txt]] test th1-tcl-2 {[regexp -- {^\d+$} [normalize_result]]} } else { puts stderr "Skipping 'th1-tcl-2', SQLite package for Tcl not available" } ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl3.txt]] test th1-tcl-3 {$RESULT eq {<hr /><p class="thmainError">ERROR:\ invalid command name "bad_command"</p>}} ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl4.txt]] test th1-tcl-4 {$RESULT eq {<hr /><p class="thmainError">ERROR:\ divide by zero</p>}} ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl5.txt]] test th1-tcl-5 {$RESULT eq {<hr /><p class="thmainError">ERROR:\ Tcl command not found: bad_command</p>} || $RESULT eq {<hr /><p\ class="thmainError">ERROR: invalid command name "bad_command"</p>}} ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl6.txt]] test th1-tcl-6 {$RESULT eq {<hr /><p class="thmainError">ERROR:\ no such command: bad_command</p>}} ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl7.txt]] test th1-tcl-7 {$RESULT eq {<hr /><p class="thmainError">ERROR:\ syntax error in expression: "2**0"</p>}} ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl8.txt]] test th1-tcl-8 {$RESULT eq {<hr /><p class="thmainError">ERROR:\ cannot invoke Tcl command: tailcall</p>} || $RESULT eq {<hr /><p\ class="thmainError">ERROR: tailcall can only be called from a proc or\ lambda</p>} || $RESULT eq {<hr /><p class="thmainError">ERROR: This test\ requires Tcl 8.6 or higher.</p>}} ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl9.txt]] test th1-tcl-9 {[string trim $RESULT] eq [list [file tail $fossilexe] 3 \ [list test-th-render --open-config [file nativename [file join $path \ th1-tcl9.txt]]]]} ############################################################################### fossil test-th-eval "tclMakeSafe a" test th1-tcl-10 {[normalize_result] eq \ {TH_ERROR: wrong # args: should be "tclMakeSafe"}} |
︙ | ︙ |
Changes to test/th1.test.
︙ | ︙ | |||
14 15 16 17 18 19 20 | # http://www.hwaci.com/drh/ # ############################################################################ # # TH1 Commands # | | | 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 | # http://www.hwaci.com/drh/ # ############################################################################ # # TH1 Commands # set path [file dirname [info script]]; test_setup ############################################################################### set th1Tcl [is_tcl_usable_by_fossil] set th1Hooks [are_th1_hooks_usable_by_fossil] ############################################################################### |
︙ | ︙ | |||
552 553 554 555 556 557 558 559 560 561 562 563 564 565 | ############################################################################### fossil test-th-eval "lindex list -0x" test th1-expr-49 {$RESULT eq {TH_ERROR: expected integer, got: "-0x"}} ############################################################################### run_in_checkout { # NOTE: The "1" here forces the checkout to be opened. fossil test-th-eval "checkout 1" } test th1-checkout-1 {[string length $RESULT] > 0} | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 | ############################################################################### fossil test-th-eval "lindex list -0x" test th1-expr-49 {$RESULT eq {TH_ERROR: expected integer, got: "-0x"}} ############################################################################### foreach perm [list a b c d e f g h i j k l m n o p q r s t u v w x y z] { if {$perm eq "u"} continue; # NOTE: Skip "reader" meta-permission. if {$perm eq "v"} continue; # NOTE: Skip "developer" meta-permission. fossil test-th-eval "anycap $perm" test th1-anycap-no-$perm-1 {$RESULT eq {0}} fossil test-th-eval "hascap $perm" test th1-hascap-no-$perm-1 {$RESULT eq {0}} fossil test-th-eval "anoncap $perm" test th1-anoncap-no-$perm-1 {$RESULT eq {0}} run_in_checkout { fossil test-th-eval --set-user-caps "anycap $perm" test th1-anycap-yes-$perm-1 {$RESULT eq {1}} set ::env(TH1_TEST_USER_CAPS) 1; # NOTE: Bad permission. fossil test-th-eval --set-user-caps "anycap $perm" test th1-anycap-no-$perm-1 {$RESULT eq {0}} unset ::env(TH1_TEST_USER_CAPS) fossil test-th-eval --set-user-caps "hascap $perm" test th1-hascap-yes-$perm-1 {$RESULT eq {1}} set ::env(TH1_TEST_USER_CAPS) 1; # NOTE: Bad permission. fossil test-th-eval --set-user-caps "hascap $perm" test th1-hascap-no-$perm-1 {$RESULT eq {0}} unset ::env(TH1_TEST_USER_CAPS) fossil test-th-eval --set-anon-caps "anoncap $perm" test th1-anoncap-yes-$perm-1 {$RESULT eq {1}} set ::env(TH1_TEST_ANON_CAPS) 1; # NOTE: Bad permission. 
fossil test-th-eval --set-anon-caps "anoncap $perm" test th1-anoncap-no-$perm-1 {$RESULT eq {0}} unset ::env(TH1_TEST_ANON_CAPS) } } ############################################################################### fossil test-th-eval "anycap oh" test th1-anycap-no-multiple-1 {$RESULT eq {0}} ############################################################################### fossil test-th-eval "hascap oh" test th1-hascap-no-multiple-1 {$RESULT eq {0}} ############################################################################### fossil test-th-eval "hascap o h" test th1-hascap-no-multiple-2 {$RESULT eq {0}} ############################################################################### fossil test-th-eval "anoncap oh" test th1-anoncap-no-multiple-1 {$RESULT eq {0}} ############################################################################### fossil test-th-eval "anoncap o h" test th1-anoncap-no-multiple-2 {$RESULT eq {0}} ############################################################################### run_in_checkout { fossil test-th-eval --set-user-caps "anycap oh" test th1-anycap-yes-multiple-1 {$RESULT eq {1}} set ::env(TH1_TEST_USER_CAPS) o fossil test-th-eval --set-user-caps "anycap oh" test th1-anycap-yes-multiple-2 {$RESULT eq {1}} unset ::env(TH1_TEST_USER_CAPS) fossil test-th-eval --set-user-caps "hascap oh" test th1-hascap-yes-multiple-1 {$RESULT eq {1}} set ::env(TH1_TEST_USER_CAPS) o fossil test-th-eval --set-user-caps "hascap oh" test th1-hascap-no-multiple-3 {$RESULT eq {0}} unset ::env(TH1_TEST_USER_CAPS) fossil test-th-eval --set-user-caps "hascap o h" test th1-hascap-yes-multiple-2 {$RESULT eq {1}} set ::env(TH1_TEST_USER_CAPS) o fossil test-th-eval --set-user-caps "hascap o h" test th1-hascap-no-multiple-4 {$RESULT eq {0}} unset ::env(TH1_TEST_USER_CAPS) fossil test-th-eval --set-anon-caps "anoncap oh" test th1-anoncap-yes-multiple-1 {$RESULT eq {1}} set ::env(TH1_TEST_ANON_CAPS) o fossil test-th-eval --set-anon-caps "anoncap oh" test th1-anoncap-no-multiple-3 {$RESULT eq {0}} unset ::env(TH1_TEST_ANON_CAPS) fossil test-th-eval --set-anon-caps "anoncap o h" test th1-anoncap-yes-multiple-2 {$RESULT eq {1}} set ::env(TH1_TEST_ANON_CAPS) o fossil test-th-eval --set-anon-caps "anoncap o h" test th1-anoncap-no-multiple-4 {$RESULT eq {0}} unset ::env(TH1_TEST_ANON_CAPS) } ############################################################################### run_in_checkout { # NOTE: The "1" here forces the checkout to be opened. fossil test-th-eval "checkout 1" } test th1-checkout-1 {[string length $RESULT] > 0} |
︙ | ︙ | |||
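As a compact summary of what the new capability tests establish: hascap requires every listed capability, anycap is satisfied by any one of them, and the TH1_TEST_USER_CAPS / TH1_TEST_ANON_CAPS environment variables appear to override the capability string that --set-user-caps / --set-anon-caps would otherwise grant. The snippet below just restates two of the assertions above and is not an addition to the suite.

    # With an effective user capability string of just "o":
    set ::env(TH1_TEST_USER_CAPS) o
    fossil test-th-eval --set-user-caps "hascap oh"
    # RESULT is 0: "h" is missing (th1-hascap-no-multiple-3).
    fossil test-th-eval --set-user-caps "anycap oh"
    # RESULT is 1: "o" alone suffices (th1-anycap-yes-multiple-2).
    unset ::env(TH1_TEST_USER_CAPS)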
912 913 914 915 916 917 918 | fossil test-th-eval "reinitialize 1; globalState configuration" test th1-reinitialize-2 {$RESULT ne ""} ############################################################################### # # NOTE: This test will fail if the command names are added to TH1, or | | | | 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 | fossil test-th-eval "reinitialize 1; globalState configuration" test th1-reinitialize-2 {$RESULT ne ""} ############################################################################### # # NOTE: This test will fail if the command names are added to TH1, or # moved from Tcl builds to plain or the reverse. Sorting the # command lists eliminates a dependence on order. # fossil test-th-eval "info commands" set sorted_result [lsort $RESULT] protOut "Sorted: $sorted_result" set base_commands {anoncap anycap array artifact break breakpoint catch\ checkout combobox continue date decorate dir enable_output encode64\ error expr for getParameter glob_match globalState hascap hasfeature\ html htmlize http httpize if info insertCsrf lindex linecount list\ llength lsearch markdown proc puts query randhex redirect regexp\ reinitialize rename render repository return searchable set\ setParameter setting stime string styleFooter styleHeader tclReady\ trace unset unversioned uplevel upvar utime verifyCsrf wiki} set tcl_commands {tclEval tclExpr tclInvoke tclIsSafe tclMakeSafe} if {$th1Tcl} { test th1-info-commands-1 {$sorted_result eq [lsort "$base_commands $tcl_commands"]} } else { test th1-info-commands-1 {$sorted_result eq [lsort "$base_commands"]} } |
︙ | ︙ | |||
1355 1356 1357 1358 1359 1360 1361 | <p><em>This is a test.</em></p> </div> }}} ############################################################################### | | | 1466 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 1477 1478 1479 1480 | <p><em>This is a test.</em></p> </div> }}} ############################################################################### set markdown [read_file [file join $path markdown-test1.md]] fossil test-th-eval [string map \ [list %markdown% $markdown] {markdown {%markdown%}}] test th1-markdown-5 {[normalize_result] eq \ {{Markdown Formatter Test Document} {<div class="markdown"> <p>This document is designed to test the markdown formatter.</p> |
︙ | ︙ | |||
1449 1450 1451 1452 1453 1454 1455 1456 1457 1458 1459 | return [string trim $x] set y; # NOTE: Never hit. } fossil test-th-source $th1FileName test th1-source-1 {$RESULT eq {TH_RETURN: 0 1 2 3 4 5 6 7 8 9}} file delete $th1FileName ############################################################################### test_cleanup | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 | return [string trim $x] set y; # NOTE: Never hit. } fossil test-th-source $th1FileName test th1-source-1 {$RESULT eq {TH_RETURN: 0 1 2 3 4 5 6 7 8 9}} file delete $th1FileName ############################################################################### # # TODO: Modify the result of this test if the list of unversioned files # changes. # run_in_checkout { fossil test-th-eval --open-config "unversioned list" } test th1-unversioned-1 {[normalize_result] eq \ {build-icons/linux.gif build-icons/linux64.gif build-icons/mac.gif\ build-icons/openbsd.gif build-icons/src.gif build-icons/win32.gif\ download.html download/fossil-linux-x86-1.32.zip\ download/fossil-linux-x86-1.33.zip download/fossil-linux-x86-1.34.zip\ download/fossil-linux-x86-1.35.zip download/fossil-macosx-x86-1.32.zip\ download/fossil-macosx-x86-1.33.zip download/fossil-macosx-x86-1.34.zip\ download/fossil-macosx-x86-1.35.zip download/fossil-openbsd-x86-1.32.zip\ download/fossil-openbsd-x86-1.33.zip download/fossil-openbsd-x86-1.34.tar.gz\ download/fossil-openbsd-x86-1.35.tar.gz download/fossil-src-1.32.tar.gz\ download/fossil-src-1.33.tar.gz download/fossil-src-1.34.tar.gz\ download/fossil-src-1.35.tar.gz download/fossil-w32-1.32.zip\ download/fossil-w32-1.33.zip download/fossil-w32-1.34.zip\ download/fossil-w32-1.35.zip download/releasenotes-1.32.html\ download/releasenotes-1.33.html download/releasenotes-1.34.html\ download/releasenotes-1.35.html index.wiki}} ############################################################################### run_in_checkout { fossil test-th-eval --open-config \ {string length [unversioned content build-icons/src.gif]} } test th1-unversioned-2 {$RESULT eq {4592}} ############################################################################### test_cleanup |
Added test/unversioned.test.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 | # # Copyright (c) 2016 D. Richard Hipp # # This program is free software; you can redistribute it and/or # modify it under the terms of the Simplified BSD License (also # known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, # but without any warranty; without even the implied warranty of # merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # ############################################################################ # # The "unversioned" command. # set path [file dirname [info script]] if {[catch {package require sha1}] != 0} then { puts "The \"sha1\" package is not available." 
test_cleanup_then_return } require_no_open_checkout test_setup; set rootDir [file normalize [pwd]] fossil test-th-eval --open-config {repository} set repository [normalize_result] if {[string length $repository] == 0} { puts "Detection of the open repository file failed." test_cleanup_then_return } write_file unversioned1.txt "This is unversioned file #1." write_file unversioned2.txt " This is unversioned file #2. " write_file "unversioned space.txt" "\nThis is unversioned file #3.\n" write_file unversioned4.txt "This is unversioned file #4." write_file unversioned5.txt "This is unversioned file #5." set env(VISUAL) [appendArgs \ [info nameofexecutable] " " [file join $path fake-editor.tcl]] ############################################################################### fossil unversioned test unversioned-1 {[normalize_result] eq \ [string map [list %fossil% [file nativename $fossilexe]] {Usage: %fossil%\ unversioned add|cat|edit|export|list|revert|remove|sync|touch}]} ############################################################################### fossil unversioned list test unversioned-2 {[normalize_result] eq {}} ############################################################################### fossil unversioned cat not-found.txt test unversioned-3 {[normalize_result] eq {}} ############################################################################### fossil unversioned cat unversioned1.txt test unversioned-4 {[normalize_result] eq {}} ############################################################################### fossil unversioned add unversioned1.txt test unversioned-5 {[normalize_result] eq {}} ############################################################################### fossil unversioned cat unversioned1.txt test unversioned-6 {[normalize_result] eq {This is unversioned file #1.}} ############################################################################### fossil unversioned list test unversioned-7 {[regexp \ {^[0-9a-f]{12} \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 28 28\ unversioned1\.txt$} [normalize_result]]} ############################################################################### fossil unversioned ls test unversioned-8 {[normalize_result] eq {unversioned1.txt}} ############################################################################### fossil unversioned remove unversioned1.txt test unversioned-9 {[normalize_result] eq {}} ############################################################################### fossil unversioned list test unversioned-10 {[normalize_result] eq {}} ############################################################################### fossil unversioned ls test unversioned-11 {[normalize_result] eq {}} ############################################################################### fossil unversioned list --all test unversioned-12 {[regexp \ {^\(deleted\) \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 0 0\ unversioned1\.txt$} [normalize_result]]} ############################################################################### fossil unversioned ls --all test unversioned-13 {[normalize_result] eq {unversioned1.txt}} ############################################################################### fossil unversioned add "unversioned space.txt" -expectError test unversioned-14 {[normalize_result] eq \ {names of unversioned files may not contain whitespace}} ############################################################################### fossil unversioned add "unversioned space.txt" --as unversioned3.txt test unversioned-15 {[normalize_result] eq {}} 
############################################################################### fossil unversioned list test unversioned-16 {[regexp \ {^[0-9a-f]{12} \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 30 30\ unversioned3\.txt$} [normalize_result]]} ############################################################################### fossil unversioned ls --l test unversioned-17 {[regexp \ {^[0-9a-f]{12} \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 30 30\ unversioned3\.txt$} [normalize_result]]} ############################################################################### fossil unversioned ls test unversioned-18 {[normalize_result] eq {unversioned3.txt}} ############################################################################### fossil unversioned add unversioned2.txt --mtime 2016-10-01 test unversioned-19 {[normalize_result] eq {}} ############################################################################### fossil unversioned list test unversioned-20 {[regexp \ {^[0-9a-f]{12} 2016-10-01 00:00:00 30 30\ unversioned2\.txt [0-9a-f]{12} \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 30 30\ unversioned3\.txt$} [normalize_result]]} ############################################################################### fossil unversioned ls test unversioned-21 {[normalize_result] eq {unversioned2.txt unversioned3.txt}} ############################################################################### fossil unversioned cat unversioned1.txt test unversioned-22 {[normalize_result] eq {}} ############################################################################### fossil unversioned cat unversioned2.txt test unversioned-23 {[::sha1::sha1 $RESULT] eq \ {962f96ebd613e4fdd9aa2d20bd9fe21a64e925f2}} ############################################################################### fossil unversioned cat unversioned3.txt -keepNewline test unversioned-24 {[::sha1::sha1 $RESULT] eq \ {c6b95509120d9703cc4fbe5cdfcb435b5912b3e4}} ############################################################################### fossil unversioned rm unversioned3.txt test unversioned-25 {[normalize_result] eq {}} ############################################################################### fossil unversioned add unversioned4.txt test unversioned-26 {[normalize_result] eq {}} ############################################################################### fossil unversioned cat unversioned4.txt set hash(before) [::sha1::sha1 $RESULT] test unversioned-27 {$hash(before) eq \ {b48ba8e2d0b498321dfd13de84867effda399af5}} ############################################################################### fossil unversioned edit unversioned4.txt test unversioned-28 {[normalize_result] eq {}} ############################################################################### fossil unversioned cat unversioned4.txt set hash(after) [::sha1::sha1 $RESULT] test unversioned-29 {$hash(after) ne $hash(before)} test unversioned-30 {[regexp { \d+ (?:-)?\d+$} $RESULT]} ############################################################################### fossil unversioned edit unversioned4.txt --mtime 2016-10-01 test unversioned-31 {[normalize_result] eq {}} ############################################################################### fossil unversioned cat unversioned4.txt test unversioned-32 {[regexp { \d+ (?:-)?\d+ \d+ (?:-)?\d+$} $RESULT]} ############################################################################### fossil unversioned list test unversioned-33 {[regexp \ {^[0-9a-f]{12} 2016-10-01 00:00:00 30 30\ unversioned2\.txt [0-9a-f]{12} 2016-10-01 00:00:00 \d+ \d+\ unversioned4\.txt$} 
[normalize_result]]} ############################################################################### fossil unversioned export unversioned2.txt unversioned2-ex.txt test unversioned-34 {[normalize_result] eq {}} test unversioned-35 {[::sha1::sha1 -hex -filename unversioned2-ex.txt] eq \ {962f96ebd613e4fdd9aa2d20bd9fe21a64e925f2}} ############################################################################### fossil unversioned hash test unversioned-36 {[regexp {^[0-9a-f]{40}$} [normalize_result]]} ############################################################################### fossil unversioned hash --debug test unversioned-37 {[regexp \ {^unversioned2\.txt 2016-10-01 00:00:00 [0-9a-f]{40} unversioned4\.txt 2016-10-01 00:00:00 [0-9a-f]{40} [0-9a-f]{40}$} [normalize_result]]} ############################################################################### fossil unversioned remove unversioned4.txt --mtime "2016-10-02 13:47:29" test unversioned-38 {[normalize_result] eq {}} ############################################################################### fossil unversioned list --all test unversioned-39 {[regexp \ {^\(deleted\) \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 0 0\ unversioned1\.txt [0-9a-f]{12} 2016-10-01 00:00:00 30 30 unversioned2\.txt \(deleted\) \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 0 0\ unversioned3\.txt \(deleted\) 2016-10-02 13:47:29 0 0 unversioned4\.txt$} \ [normalize_result]]} ############################################################################### fossil unversioned touch unversioned1.txt --mtime "2016-10-03 23:01:44" test unversioned-40 {[normalize_result] eq {}} ############################################################################### fossil unversioned list --all test unversioned-41 {[regexp \ {^\(deleted\) 2016-10-03 23:01:44 0 0\ unversioned1\.txt [0-9a-f]{12} 2016-10-01 00:00:00 30 30 unversioned2\.txt \(deleted\) \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 0 0\ unversioned3\.txt \(deleted\) 2016-10-02 13:47:29 0 0 unversioned4\.txt$} \ [normalize_result]]} ############################################################################### fossil unversioned add unversioned5.txt test unversioned-42 {[normalize_result] eq {}} ############################################################################### fossil unversioned touch unversioned5.txt test unversioned-43 {[normalize_result] eq {}} ############################################################################### fossil unversioned list test unversioned-44 {[regexp \ {^[0-9a-f]{12} 2016-10-01 00:00:00 30 30 unversioned2\.txt [0-9a-f]{12} \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 28 28\ unversioned5\.txt$} [normalize_result]]} ############################################################################### set password [string trim [clock seconds] -] fossil user new uvtester "Unversioned Test User" $password fossil user capabilities uvtester oy ############################################################################### foreach {pid port outTmpFile} [test_start_server $repository stopArg] {} puts [appendArgs "Started Fossil server, pid \"" $pid \" ", port \"" $port \".] set remote [appendArgs http://uvtester: $password @localhost: $port /] ############################################################################### set clientDir [file join $tempPath [appendArgs \ uvtest_ [string trim [clock seconds] -] _ [getSeqNo]]] set savedPwd [pwd] file mkdir $clientDir; cd $clientDir puts [appendArgs "Now in client directory \"" [pwd] \".] write_file unversioned-client1.txt "This is unversioned client file #1." 
############################################################################### fossil_maybe_answer y clone $remote uvrepo.fossil fossil open uvrepo.fossil ############################################################################### fossil unversioned list test unversioned-45 {[normalize_result] eq {}} ############################################################################### fossil_maybe_answer y unversioned sync $remote test unversioned-46 {[regexp \ {Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 0 received: 2 \n? done, sent: \d+ received: \d+ ip: 127.0.0.1} [normalize_result]]} ############################################################################### fossil unversioned ls test unversioned-47 {[normalize_result] eq {unversioned2.txt unversioned5.txt}} ############################################################################### set env(FAKE_EDITOR_SCRIPT) "append data this_is_a_test"; # deterministic fossil unversioned edit unversioned2.txt test unversioned-48 {[normalize_result] eq {}} unset env(FAKE_EDITOR_SCRIPT) ############################################################################### fossil unversioned cat unversioned2.txt test unversioned-49 {[::sha1::sha1 $RESULT] eq \ {e15d4b576fc04e3bb5e44a33d44d104dd5b19428}} ############################################################################### fossil unversioned remove unversioned5.txt test unversioned-50 {[normalize_result] eq {}} ############################################################################### fossil unversioned list --all test unversioned-51 {[regexp \ {^[0-9a-f]{12} \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 44 44\ unversioned2\.txt \(deleted\) \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 0 0\ unversioned5\.txt$} [normalize_result]]} ############################################################################### fossil_maybe_answer y unversioned revert $remote test unversioned-52 {[regexp \ {Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 0 received: 2 \n? done, sent: \d+ received: \d+ ip: 127.0.0.1} [normalize_result]]} ############################################################################### fossil unversioned list test unversioned-53 {[regexp \ {^[0-9a-f]{12} 2016-10-01 00:00:00 30 30\ unversioned2\.txt [0-9a-f]{12} \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 28 28\ unversioned5\.txt$} [normalize_result]]} ############################################################################### fossil unversioned add unversioned-client1.txt test unversioned-54 {[normalize_result] eq {}} ############################################################################### fossil_maybe_answer y unversioned sync $remote test unversioned-55 {[regexp \ {Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 1 received: 0 Round-trips: 2 Artifacts sent: 1 received: 0 \n? done, sent: \d+ received: \d+ ip: 127.0.0.1} [normalize_result]]} ############################################################################### fossil close test unversioned-56 {[normalize_result] eq {}} ############################################################################### cd $savedPwd; unset savedPwd file delete -force $clientDir puts [appendArgs "Now in server directory \"" [pwd] \".] 
############################################################################### set stopped [test_stop_server $stopArg $pid $outTmpFile] puts [appendArgs \ [expr {$stopped ? "Stopped" : "Could not stop"}] \ " Fossil server, pid \"" $pid "\", using argument \"" \ $stopArg \".] ############################################################################### fossil unversioned list test unversioned-57 {[regexp \ {^[0-9a-f]{12} \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 35 35\ unversioned-client1\.txt [0-9a-f]{12} 2016-10-01 00:00:00 30 30 unversioned2\.txt [0-9a-f]{12} \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 28 28\ unversioned5\.txt$} [normalize_result]]} ############################################################################### fossil unversioned cat unversioned-client1.txt test unversioned-58 {[::sha1::sha1 $RESULT] eq \ {a34606f714afe309bb531fba6051eaf25201e8a2}} ############################################################################### test_cleanup |
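The new test leans on literal SHA1 digests of command output. If the fixture strings ever change, something along the lines of the helper below could regenerate them; it is not part of the check-in and uv_digest is a name invented here. Note that the digest depends on whether -keepNewline is passed, exactly as in the assertions above.

    package require sha1

    proc uv_digest {fileName args} {
      # args may carry -keepNewline, matching the flag used by the test.
      fossil unversioned cat $fileName {*}$args
      return [::sha1::sha1 $::RESULT]
    }

    # e.g. (while the file is still stored): print the digest that an
    # assertion like unversioned-24 would compare against.
    puts [uv_digest unversioned3.txt -keepNewline]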
Changes to test/utf.test.
more than 10,000 changes
Changes to test/wiki.test.
︙ | ︙ | |||
122 123 124 125 126 127 128 | ############################################################################### # Trying to add a technote with the same timestamp should succeed and create a # second tech note fossil wiki create 2ndnote f3 -technote {2016-01-01 12:34} test wiki-13 {$CODE == 0} fossil wiki list --technote | | | 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 | ############################################################################### # Trying to add a technote with the same timestamp should succeed and create a # second tech note fossil wiki create 2ndnote f3 -technote {2016-01-01 12:34} test wiki-13 {$CODE == 0} fossil wiki list --technote set technotelist [split $RESULT "\n"] test wiki-13.1 {[llength $technotelist] == 2} ############################################################################### # committing a change to an existing technote should replace the page on export # (this should update the tech note from wiki-13 as that is the most recently # updated one, which should also be the one exported by the export command) write_file f4 "technote 2nd variant"
︙ | ︙ | |||
145 146 147 148 149 150 151 | ############################################################################### # But we shouldn't be able to update non-existent pages fossil wiki commit doesntexist f1 -expectError test wiki-16 {$CODE != 0} ############################################################################### | | | 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 | ############################################################################### # But we shouldn't be able to update non-existent pages fossil wiki commit doesntexist f1 -expectError test wiki-16 {$CODE != 0} ############################################################################### # Check specifying tags for a technote is OK write_file f5 "technote with tags" fossil wiki create {tagged technote} f5 --technote {2016-01-02 12:34} --technote-tags {A B} test wiki-17 {$CODE == 0} write_file f5.1 "edited and tagged technote" fossil wiki commit {tagged technote} f5 --technote {2016-01-02 12:34} --technote-tags {C D} test wiki-18 {$CODE == 0}
︙ | ︙ | |||
206 207 208 209 210 211 212 | write_file f8 "Contents of a 'unique' tech note" fossil wiki create {Unique technote} f8 --technote {2016-01-05 01:02:03} fossil timeline test wiki-30 {[string match *Unique*technote* $RESULT]} ############################################################################### # Check for a collision between an attachment and a note, this was a | | | | | 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 | write_file f8 "Contents of a 'unique' tech note" fossil wiki create {Unique technote} f8 --technote {2016-01-05 01:02:03} fossil timeline test wiki-30 {[string match *Unique*technote* $RESULT]} ############################################################################### # Check for a collision between an attachment and a note, this was a # bug that resulted from some code treating the attachment entry as if it # were a technote when it isn't really. # # First, wait for the top of the next second so the attachment # happens at a known time, then add an attachment to an existing note # and a new note immediately after. set t0 [clock seconds] while {$t0 == [clock seconds]} { after 100 } set t1 [clock format [clock seconds] -gmt 1 -format "%Y-%m-%d %H:%M:%S"] write_file f9 "Timestamp: $t1" |
︙ | ︙ | |||
240 241 242 243 244 245 246 | # "now" is a valid stamp. set t2 [clock format [clock seconds] -gmt 1 -format "%Y-%m-%d %H:%M:%S"] write_file f10 "Even unstampted notes are delivered.\nStamped $t2" fossil wiki create "Unstamped Note" f10 --technote -expectError test wiki-33 {$CODE != 0} fossil wiki create "Unstamped Note" f10 --technote now test wiki-34 {$CODE == 0} | | | | | | | | 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 | # "now" is a valid stamp. set t2 [clock format [clock seconds] -gmt 1 -format "%Y-%m-%d %H:%M:%S"] write_file f10 "Even unstampted notes are delivered.\nStamped $t2" fossil wiki create "Unstamped Note" f10 --technote -expectError test wiki-33 {$CODE != 0} fossil wiki create "Unstamped Note" f10 --technote now test wiki-34 {$CODE == 0} fossil wiki list -t test wiki-35 {[string match "*$t2*" $RESULT]} ############################################################################### # Check an attachment to it in the same second works. write_file f11 "Time Stamp was $t2" fossil attachment add f11 --technote $t2 test wiki-36 {$CODE == 0} fossil timeline test wiki-36-1 {$CODE == 0} fossil wiki list -t test wiki-36-2 {$CODE == 0} ############################################################################### # Check that we have the expected number of tech notes on the list (and not # extra ones from other events (such as the attachments) - 8 tech notes # expected created by tests 9, 13, 17, 19, 29, 31, 32 and 34 fossil wiki list --technote set technotelist [split $RESULT "\n"] test wiki-37 {[llength $technotelist] == 8} ############################################################################### # Check that using the show-technote-ids shows the same tech notes in the same # order (with the technote id as the first word of the line) fossil wiki list --technote --show-technote-ids set technoteidlist [split $RESULT "\n"] |
︙ | ︙ | |||
287 288 289 290 291 292 293 | set anoldtechnoteid [lindex [split [lindex $technotelist [llength $technotelist]-1]] 0] fossil wiki export a12 --technote $anoldtechnoteid test wiki-40 {[similar_file f12 a12]} ############################################################################### # Also check that we can specify a prefix of the tech note id (note: with # 9 items in the tech note list at this point there is a chance of a collision. | | | 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 | set anoldtechnoteid [lindex [split [lindex $technotelist [llength $technotelist]-1]] 0] fossil wiki export a12 --technote $anoldtechnoteid test wiki-40 {[similar_file f12 a12]} ############################################################################### # Also check that we can specify a prefix of the tech note id (note: with # 9 items in the tech note list at this point there is a chance of a collision. # However, with a 20 character prefix the chance of a collision is # approximately 1 in 10^22, so this test ignores that possibility.) fossil wiki export a12.1 --technote [string range $anoldtechnoteid 0 20] test wiki-41 {[similar_file f12 a12.1]} ############################################################################### # Now we need to force a collision in the first four characters of the tech # note id if we don't already have one, so we can check that we get an error if the # tech note id is ambiguous set idcounts [dict create] set maxcount 0 fossil wiki list --technote --show-technote-ids set technotelist [split $RESULT "\n"] for {set i 0} {$i < [llength $technotelist]} {incr i} { set fullid [lindex $technotelist $i] set id [string range $fullid 0 3] dict incr idcounts $id if {[dict get $idcounts $id] > $maxcount} { set maxid $id incr maxcount } } # get i so that, as a julian date, it is in the 1800s, i.e., older than # any other tech note, but after 1 AD set i 2400000 while {$maxcount < 2} { # keep getting older incr i -1 write_file f13 "A tech note with timestamp of jday=$i" fossil wiki create "timestamp of $i" f13 --technote "$i" fossil wiki list --technote --show-technote-ids set technotelist [split $RESULT "\n"] set oldesttechnoteid [lindex [split [lindex $technotelist [llength $technotelist]-1]] 0] set id [string range $oldesttechnoteid 0 3] dict incr idcounts $id if {[dict get $idcounts $id] > $maxcount} { set maxid $id incr maxcount } } # Save the duplicate id for this and later tests set duplicateid $maxid fossil wiki export a13 --technote $duplicateid -expectError test wiki-42 {$CODE != 0}
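The "1 in 10^22" figure in the comment is a birthday-style union bound: the 9 existing tech note ids form 9*8/2 = 36 unordered pairs, and each pair agrees on a fixed 20-hex-digit prefix with probability 1/16^20. A quick Tcl check of that arithmetic, purely illustrative:

    # Rough probability that any two of 9 random hex ids share a
    # 20-character prefix (union bound over all 36 pairs).
    set pairs [expr {9 * 8 / 2}]          ;# 36
    set space [expr {pow(16, 20)}]        ;# about 1.2e24 possible prefixes
    set prob  [expr {$pairs / $space}]    ;# about 3e-23
    puts [format "collision probability ~ %.1e" $prob]

The loop that follows does not rely on luck at all: it keeps creating tech notes at ever older julian-day stamps until two ids share their first four characters, which by the same birthday reasoning over the 16^4 = 65536 possible four-character prefixes typically happens within a few hundred iterations.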
︙ | ︙ |
Changes to win/Makefile.PellesCGMake.
︙ | ︙ | |||
81 82 83 84 85 86 87 | UTILS_OBJ=$(UTILS:.exe=.obj) UTILS_SRC=$(foreach uf,$(UTILS),$(SRCDIR)$(uf:.exe=.c)) # define the SQLite files, which need special flags on compile SQLITESRC=sqlite3.c ORIGSQLITESRC=$(foreach sf,$(SQLITESRC),$(SRCDIR)$(sf)) SQLITEOBJ=$(foreach sf,$(SQLITESRC),$(sf:.c=.obj)) | | | 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 | UTILS_OBJ=$(UTILS:.exe=.obj) UTILS_SRC=$(foreach uf,$(UTILS),$(SRCDIR)$(uf:.exe=.c)) # define the SQLite files, which need special flags on compile SQLITESRC=sqlite3.c ORIGSQLITESRC=$(foreach sf,$(SQLITESRC),$(SRCDIR)$(sf)) SQLITEOBJ=$(foreach sf,$(SQLITESRC),$(sf:.c=.obj)) SQLITEDEFINES=-DNDEBUG=1 -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_MEMSTATUS=0 -DSQLITE_DEFAULT_WAL_SYNCHRONOUS=1 -DSQLITE_LIKE_DOESNT_MATCH_BLOBS -DSQLITE_OMIT_DECLTYPE -DSQLITE_OMIT_DEPRECATED -DSQLITE_OMIT_PROGRESS_CALLBACK -DSQLITE_OMIT_SHARED_CACHE -DSQLITE_OMIT_LOAD_EXTENSION -DSQLITE_MAX_EXPR_DEPTH=0 -DSQLITE_USE_ALLOCA -DSQLITE_ENABLE_LOCKING_STYLE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4 -DSQLITE_ENABLE_EXPLAIN_COMMENTS -DSQLITE_ENABLE_FTS4 -DSQLITE_ENABLE_FTS3_PARENTHESIS -DSQLITE_ENABLE_DBSTAT_VTAB -DSQLITE_ENABLE_JSON1 -DSQLITE_ENABLE_FTS5 -DSQLITE_WIN32_NO_ANSI # define the SQLite shell files, which need special flags on compile SQLITESHELLSRC=shell.c ORIGSQLITESHELLSRC=$(foreach sf,$(SQLITESHELLSRC),$(SRCDIR)$(sf)) SQLITESHELLOBJ=$(foreach sf,$(SQLITESHELLSRC),$(sf:.c=.obj)) SQLITESHELLDEFINES=-Dmain=sqlite3_shell -DSQLITE_SHELL_IS_UTF8=1 -DSQLITE_OMIT_LOAD_EXTENSION=1 -DUSE_SYSTEM_SQLITE=$(USE_SYSTEM_SQLITE) -DSQLITE_SHELL_DBNAME_PROC=fossil_open -Daccess=file_access -Dsystem=fossil_system -Dgetenv=fossil_getenv -Dfopen=fossil_fopen |
︙ | ︙ |
Changes to win/Makefile.dmc.
︙ | ︙ | |||
22 23 24 25 26 27 28 | SSL = CFLAGS = -o BCC = $(DMDIR)\bin\dmc $(CFLAGS) TCC = $(DMDIR)\bin\dmc $(CFLAGS) $(DMCDEF) $(SSL) $(INCL) LIBS = $(DMDIR)\extra\lib\ zlib wsock32 advapi32 | | | | | | 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 | SSL = CFLAGS = -o BCC = $(DMDIR)\bin\dmc $(CFLAGS) TCC = $(DMDIR)\bin\dmc $(CFLAGS) $(DMCDEF) $(SSL) $(INCL) LIBS = $(DMDIR)\extra\lib\ zlib wsock32 advapi32 SQLITE_OPTIONS = -DNDEBUG=1 -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_MEMSTATUS=0 -DSQLITE_DEFAULT_WAL_SYNCHRONOUS=1 -DSQLITE_LIKE_DOESNT_MATCH_BLOBS -DSQLITE_OMIT_DECLTYPE -DSQLITE_OMIT_DEPRECATED -DSQLITE_OMIT_PROGRESS_CALLBACK -DSQLITE_OMIT_SHARED_CACHE -DSQLITE_OMIT_LOAD_EXTENSION -DSQLITE_MAX_EXPR_DEPTH=0 -DSQLITE_USE_ALLOCA -DSQLITE_ENABLE_LOCKING_STYLE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4 -DSQLITE_ENABLE_EXPLAIN_COMMENTS -DSQLITE_ENABLE_FTS4 -DSQLITE_ENABLE_FTS3_PARENTHESIS -DSQLITE_ENABLE_DBSTAT_VTAB -DSQLITE_ENABLE_JSON1 -DSQLITE_ENABLE_FTS5 SHELL_OPTIONS = -Dmain=sqlite3_shell -DSQLITE_SHELL_IS_UTF8=1 -DSQLITE_OMIT_LOAD_EXTENSION=1 -DUSE_SYSTEM_SQLITE=$(USE_SYSTEM_SQLITE) -DSQLITE_SHELL_DBNAME_PROC=fossil_open -Daccess=file_access -Dsystem=fossil_system -Dgetenv=fossil_getenv -Dfopen=fossil_fopen SRC = add_.c allrepo_.c attach_.c bag_.c bisect_.c blob_.c branch_.c browse_.c builtin_.c bundle_.c cache_.c captcha_.c cgi_.c checkin_.c checkout_.c clearsign_.c clone_.c comformat_.c configure_.c content_.c db_.c delta_.c deltacmd_.c descendants_.c diff_.c diffcmd_.c dispatch_.c doc_.c encode_.c event_.c export_.c file_.c finfo_.c foci_.c fshell_.c fusefs_.c glob_.c graph_.c gzip_.c http_.c http_socket_.c http_ssl_.c http_transport_.c import_.c info_.c json_.c json_artifact_.c json_branch_.c json_config_.c json_diff_.c json_dir_.c json_finfo_.c json_login_.c json_query_.c json_report_.c json_status_.c json_tag_.c json_timeline_.c json_user_.c json_wiki_.c leaf_.c loadctrl_.c login_.c lookslike_.c main_.c manifest_.c markdown_.c markdown_html_.c md5_.c merge_.c merge3_.c moderate_.c name_.c path_.c piechart_.c pivot_.c popen_.c pqueue_.c printf_.c publish_.c purge_.c rebuild_.c regexp_.c report_.c rss_.c schema_.c search_.c setup_.c sha1_.c shun_.c sitemap_.c skins_.c sqlcmd_.c stash_.c stat_.c statrep_.c style_.c sync_.c tag_.c tar_.c th_main_.c timeline_.c tkt_.c tktsetup_.c undo_.c unicode_.c unversioned_.c update_.c url_.c user_.c utf8_.c util_.c verify_.c vfile_.c wiki_.c wikiformat_.c winfile_.c winhttp_.c wysiwyg_.c xfer_.c xfersetup_.c zip_.c OBJ = $(OBJDIR)\add$O $(OBJDIR)\allrepo$O $(OBJDIR)\attach$O $(OBJDIR)\bag$O $(OBJDIR)\bisect$O $(OBJDIR)\blob$O $(OBJDIR)\branch$O $(OBJDIR)\browse$O $(OBJDIR)\builtin$O $(OBJDIR)\bundle$O $(OBJDIR)\cache$O $(OBJDIR)\captcha$O $(OBJDIR)\cgi$O $(OBJDIR)\checkin$O $(OBJDIR)\checkout$O $(OBJDIR)\clearsign$O $(OBJDIR)\clone$O $(OBJDIR)\comformat$O $(OBJDIR)\configure$O $(OBJDIR)\content$O $(OBJDIR)\db$O $(OBJDIR)\delta$O $(OBJDIR)\deltacmd$O $(OBJDIR)\descendants$O $(OBJDIR)\diff$O $(OBJDIR)\diffcmd$O $(OBJDIR)\dispatch$O $(OBJDIR)\doc$O $(OBJDIR)\encode$O $(OBJDIR)\event$O $(OBJDIR)\export$O $(OBJDIR)\file$O $(OBJDIR)\finfo$O $(OBJDIR)\foci$O $(OBJDIR)\fshell$O $(OBJDIR)\fusefs$O $(OBJDIR)\glob$O $(OBJDIR)\graph$O $(OBJDIR)\gzip$O $(OBJDIR)\http$O $(OBJDIR)\http_socket$O $(OBJDIR)\http_ssl$O $(OBJDIR)\http_transport$O $(OBJDIR)\import$O $(OBJDIR)\info$O $(OBJDIR)\json$O $(OBJDIR)\json_artifact$O $(OBJDIR)\json_branch$O $(OBJDIR)\json_config$O $(OBJDIR)\json_diff$O $(OBJDIR)\json_dir$O 
$(OBJDIR)\json_finfo$O $(OBJDIR)\json_login$O $(OBJDIR)\json_query$O $(OBJDIR)\json_report$O $(OBJDIR)\json_status$O $(OBJDIR)\json_tag$O $(OBJDIR)\json_timeline$O $(OBJDIR)\json_user$O $(OBJDIR)\json_wiki$O $(OBJDIR)\leaf$O $(OBJDIR)\loadctrl$O $(OBJDIR)\login$O $(OBJDIR)\lookslike$O $(OBJDIR)\main$O $(OBJDIR)\manifest$O $(OBJDIR)\markdown$O $(OBJDIR)\markdown_html$O $(OBJDIR)\md5$O $(OBJDIR)\merge$O $(OBJDIR)\merge3$O $(OBJDIR)\moderate$O $(OBJDIR)\name$O $(OBJDIR)\path$O $(OBJDIR)\piechart$O $(OBJDIR)\pivot$O $(OBJDIR)\popen$O $(OBJDIR)\pqueue$O $(OBJDIR)\printf$O $(OBJDIR)\publish$O $(OBJDIR)\purge$O $(OBJDIR)\rebuild$O $(OBJDIR)\regexp$O $(OBJDIR)\report$O $(OBJDIR)\rss$O $(OBJDIR)\schema$O $(OBJDIR)\search$O $(OBJDIR)\setup$O $(OBJDIR)\sha1$O $(OBJDIR)\shun$O $(OBJDIR)\sitemap$O $(OBJDIR)\skins$O $(OBJDIR)\sqlcmd$O $(OBJDIR)\stash$O $(OBJDIR)\stat$O $(OBJDIR)\statrep$O $(OBJDIR)\style$O $(OBJDIR)\sync$O $(OBJDIR)\tag$O $(OBJDIR)\tar$O $(OBJDIR)\th_main$O $(OBJDIR)\timeline$O $(OBJDIR)\tkt$O $(OBJDIR)\tktsetup$O $(OBJDIR)\undo$O $(OBJDIR)\unicode$O $(OBJDIR)\unversioned$O $(OBJDIR)\update$O $(OBJDIR)\url$O $(OBJDIR)\user$O $(OBJDIR)\utf8$O $(OBJDIR)\util$O $(OBJDIR)\verify$O $(OBJDIR)\vfile$O $(OBJDIR)\wiki$O $(OBJDIR)\wikiformat$O $(OBJDIR)\winfile$O $(OBJDIR)\winhttp$O $(OBJDIR)\wysiwyg$O $(OBJDIR)\xfer$O $(OBJDIR)\xfersetup$O $(OBJDIR)\zip$O $(OBJDIR)\shell$O $(OBJDIR)\sqlite3$O $(OBJDIR)\th$O $(OBJDIR)\th_lang$O RC=$(DMDIR)\bin\rcc RCFLAGS=-32 -w1 -I$(SRCDIR) /D__DMC__ APPNAME = $(OBJDIR)\fossil$(E) all: $(APPNAME) $(APPNAME) : translate$E mkindex$E codecheck1$E headers $(OBJ) $(OBJDIR)\link cd $(OBJDIR) codecheck1$E $(SRC) $(DMDIR)\bin\link @link $(OBJDIR)\fossil.res: $B\win\fossil.rc $(RC) $(RCFLAGS) -o$@ $** $(OBJDIR)\link: $B\win\Makefile.dmc $(OBJDIR)\fossil.res +echo add allrepo attach bag bisect blob branch browse builtin bundle cache captcha cgi checkin checkout clearsign clone comformat configure content db delta deltacmd descendants diff diffcmd dispatch doc encode event export file finfo foci fshell fusefs glob graph gzip http http_socket http_ssl http_transport import info json json_artifact json_branch json_config json_diff json_dir json_finfo json_login json_query json_report json_status json_tag json_timeline json_user json_wiki leaf loadctrl login lookslike main manifest markdown markdown_html md5 merge merge3 moderate name path piechart pivot popen pqueue printf publish purge rebuild regexp report rss schema search setup sha1 shun sitemap skins sqlcmd stash stat statrep style sync tag tar th_main timeline tkt tktsetup undo unicode unversioned update url user utf8 util verify vfile wiki wikiformat winfile winhttp wysiwyg xfer xfersetup zip shell sqlite3 th th_lang > $@ +echo fossil >> $@ +echo fossil >> $@ +echo $(LIBS) >> $@ +echo. >> $@ +echo fossil >> $@ translate$E: $(SRCDIR)\translate.c |
︙ | ︙ | |||
276 277 278 279 280 281 282 283 284 285 286 287 288 289 | +translate$E $** > $@ $(OBJDIR)\diffcmd$O : diffcmd_.c diffcmd.h $(TCC) -o$@ -c diffcmd_.c diffcmd_.c : $(SRCDIR)\diffcmd.c +translate$E $** > $@ $(OBJDIR)\doc$O : doc_.c doc.h $(TCC) -o$@ -c doc_.c doc_.c : $(SRCDIR)\doc.c +translate$E $** > $@ | > > > > > > | 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 | +translate$E $** > $@ $(OBJDIR)\diffcmd$O : diffcmd_.c diffcmd.h $(TCC) -o$@ -c diffcmd_.c diffcmd_.c : $(SRCDIR)\diffcmd.c +translate$E $** > $@ $(OBJDIR)\dispatch$O : dispatch_.c dispatch.h $(TCC) -o$@ -c dispatch_.c dispatch_.c : $(SRCDIR)\dispatch.c +translate$E $** > $@ $(OBJDIR)\doc$O : doc_.c doc.h $(TCC) -o$@ -c doc_.c doc_.c : $(SRCDIR)\doc.c +translate$E $** > $@ |
︙ | ︙ | |||
318 319 320 321 322 323 324 325 326 327 328 329 330 331 | +translate$E $** > $@ $(OBJDIR)\foci$O : foci_.c foci.h $(TCC) -o$@ -c foci_.c foci_.c : $(SRCDIR)\foci.c +translate$E $** > $@ $(OBJDIR)\fusefs$O : fusefs_.c fusefs.h $(TCC) -o$@ -c fusefs_.c fusefs_.c : $(SRCDIR)\fusefs.c +translate$E $** > $@ | > > > > > > | 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 | +translate$E $** > $@ $(OBJDIR)\foci$O : foci_.c foci.h $(TCC) -o$@ -c foci_.c foci_.c : $(SRCDIR)\foci.c +translate$E $** > $@ $(OBJDIR)\fshell$O : fshell_.c fshell.h $(TCC) -o$@ -c fshell_.c fshell_.c : $(SRCDIR)\fshell.c +translate$E $** > $@ $(OBJDIR)\fusefs$O : fusefs_.c fusefs.h $(TCC) -o$@ -c fusefs_.c fusefs_.c : $(SRCDIR)\fusefs.c +translate$E $** > $@ |
︙ | ︙ | |||
744 745 746 747 748 749 750 751 752 753 754 755 756 757 | +translate$E $** > $@ $(OBJDIR)\unicode$O : unicode_.c unicode.h $(TCC) -o$@ -c unicode_.c unicode_.c : $(SRCDIR)\unicode.c +translate$E $** > $@ $(OBJDIR)\update$O : update_.c update.h $(TCC) -o$@ -c update_.c update_.c : $(SRCDIR)\update.c +translate$E $** > $@ | > > > > > > | 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 | +translate$E $** > $@ $(OBJDIR)\unicode$O : unicode_.c unicode.h $(TCC) -o$@ -c unicode_.c unicode_.c : $(SRCDIR)\unicode.c +translate$E $** > $@ $(OBJDIR)\unversioned$O : unversioned_.c unversioned.h $(TCC) -o$@ -c unversioned_.c unversioned_.c : $(SRCDIR)\unversioned.c +translate$E $** > $@ $(OBJDIR)\update$O : update_.c update.h $(TCC) -o$@ -c update_.c update_.c : $(SRCDIR)\update.c +translate$E $** > $@ |
︙ | ︙ | |||
836 837 838 839 840 841 842 | $(OBJDIR)\zip$O : zip_.c zip.h $(TCC) -o$@ -c zip_.c zip_.c : $(SRCDIR)\zip.c +translate$E $** > $@ headers: makeheaders$E page_index.h builtin_data.h VERSION.h | | | 854 855 856 857 858 859 860 861 862 | $(OBJDIR)\zip$O : zip_.c zip.h $(TCC) -o$@ -c zip_.c zip_.c : $(SRCDIR)\zip.c +translate$E $** > $@ headers: makeheaders$E page_index.h builtin_data.h VERSION.h +makeheaders$E add_.c:add.h allrepo_.c:allrepo.h attach_.c:attach.h bag_.c:bag.h bisect_.c:bisect.h blob_.c:blob.h branch_.c:branch.h browse_.c:browse.h builtin_.c:builtin.h bundle_.c:bundle.h cache_.c:cache.h captcha_.c:captcha.h cgi_.c:cgi.h checkin_.c:checkin.h checkout_.c:checkout.h clearsign_.c:clearsign.h clone_.c:clone.h comformat_.c:comformat.h configure_.c:configure.h content_.c:content.h db_.c:db.h delta_.c:delta.h deltacmd_.c:deltacmd.h descendants_.c:descendants.h diff_.c:diff.h diffcmd_.c:diffcmd.h dispatch_.c:dispatch.h doc_.c:doc.h encode_.c:encode.h event_.c:event.h export_.c:export.h file_.c:file.h finfo_.c:finfo.h foci_.c:foci.h fshell_.c:fshell.h fusefs_.c:fusefs.h glob_.c:glob.h graph_.c:graph.h gzip_.c:gzip.h http_.c:http.h http_socket_.c:http_socket.h http_ssl_.c:http_ssl.h http_transport_.c:http_transport.h import_.c:import.h info_.c:info.h json_.c:json.h json_artifact_.c:json_artifact.h json_branch_.c:json_branch.h json_config_.c:json_config.h json_diff_.c:json_diff.h json_dir_.c:json_dir.h json_finfo_.c:json_finfo.h json_login_.c:json_login.h json_query_.c:json_query.h json_report_.c:json_report.h json_status_.c:json_status.h json_tag_.c:json_tag.h json_timeline_.c:json_timeline.h json_user_.c:json_user.h json_wiki_.c:json_wiki.h leaf_.c:leaf.h loadctrl_.c:loadctrl.h login_.c:login.h lookslike_.c:lookslike.h main_.c:main.h manifest_.c:manifest.h markdown_.c:markdown.h markdown_html_.c:markdown_html.h md5_.c:md5.h merge_.c:merge.h merge3_.c:merge3.h moderate_.c:moderate.h name_.c:name.h path_.c:path.h piechart_.c:piechart.h pivot_.c:pivot.h popen_.c:popen.h pqueue_.c:pqueue.h printf_.c:printf.h publish_.c:publish.h purge_.c:purge.h rebuild_.c:rebuild.h regexp_.c:regexp.h report_.c:report.h rss_.c:rss.h schema_.c:schema.h search_.c:search.h setup_.c:setup.h sha1_.c:sha1.h shun_.c:shun.h sitemap_.c:sitemap.h skins_.c:skins.h sqlcmd_.c:sqlcmd.h stash_.c:stash.h stat_.c:stat.h statrep_.c:statrep.h style_.c:style.h sync_.c:sync.h tag_.c:tag.h tar_.c:tar.h th_main_.c:th_main.h timeline_.c:timeline.h tkt_.c:tkt.h tktsetup_.c:tktsetup.h undo_.c:undo.h unicode_.c:unicode.h unversioned_.c:unversioned.h update_.c:update.h url_.c:url.h user_.c:user.h utf8_.c:utf8.h util_.c:util.h verify_.c:verify.h vfile_.c:vfile.h wiki_.c:wiki.h wikiformat_.c:wikiformat.h winfile_.c:winfile.h winhttp_.c:winhttp.h wysiwyg_.c:wysiwyg.h xfer_.c:xfer.h xfersetup_.c:xfersetup.h zip_.c:zip.h $(SRCDIR)\sqlite3.h $(SRCDIR)\th.h VERSION.h $(SRCDIR)\cson_amalgamation.h @copy /Y nul: headers |
Changes to win/Makefile.mingw.
︙ | ︙ | |||
30 31 32 33 34 35 36 37 38 39 40 41 42 | # the following to point from the build directory to the src/ folder. # SRCDIR = src #### The directory into which object code files should be written. # OBJDIR = wbld #### C Compiler and options for use in building executables that # will run on the platform that is doing the build. This is used # to compile code-generator programs as part of the build process. # See TCC below for the C compiler for building the finished binary. # | > > > > > > > > | | 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 | # the following to point from the build directory to the src/ folder. # SRCDIR = src #### The directory into which object code files should be written. # OBJDIR = wbld #### C compiler for use in building executables that will run on # the platform that is doing the build. This is used to compile # code-generator programs as part of the build process. See TCC # and TCCEXE below for the C compiler for building the finished # binary. # BCCEXE = gcc #### C Compiler and options for use in building executables that # will run on the platform that is doing the build. This is used # to compile code-generator programs as part of the build process. # See TCC below for the C compiler for building the finished binary. # BCC = $(BCCEXE) #### Enable compiling with debug symbols (much larger binary) # # FOSSIL_ENABLE_SYMBOLS = 1 #### Enable JSON (http://www.json.org) support using "cson" # |
︙ | ︙ | |||
132 133 134 135 136 137 138 | # used, taking into account whether zlib is actually enabled and the target # processor architecture. # ifndef X64 SSLCONFIG = mingw ifndef FOSSIL_ENABLE_MINIZ ZLIBCONFIG = LOC="-DASMV -DASMINF" OBJA="inffas86.o match.o" | | | | | | 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 | # used, taking into account whether zlib is actually enabled and the target # processor architecture. # ifndef X64 SSLCONFIG = mingw ifndef FOSSIL_ENABLE_MINIZ ZLIBCONFIG = LOC="-DASMV -DASMINF" OBJA="inffas86.o match.o" ZLIBTARGETS = $(ZLIBDIR)/inffas86.o $(ZLIBDIR)/match.o else ZLIBCONFIG = ZLIBTARGETS = endif else SSLCONFIG = mingw64 ZLIBCONFIG = ZLIBTARGETS = endif #### Disable creation of the OpenSSL shared libraries. Also, disable support # for both SSLv2 and SSLv3 (i.e. thereby forcing the use of TLS). # SSLCONFIG += no-ssl2 no-ssl3 no-shared #### When using zlib, make sure that OpenSSL is configured to use the zlib # that Fossil knows about (i.e. the one within the source tree). # ifndef FOSSIL_ENABLE_MINIZ SSLCONFIG += --with-zlib-lib=$(PWD)/$(ZLIBDIR) --with-zlib-include=$(PWD)/$(ZLIBDIR) zlib endif #### The directories where the OpenSSL include and library files are located. # The recommended usage here is to use the Sysinternals junction tool # to create a hard link between an "openssl-1.x" sub-directory of the # Fossil source code directory and the target OpenSSL source directory. # OPENSSLDIR = $(SRCDIR)/../compat/openssl-1.0.2j OPENSSLINCDIR = $(OPENSSLDIR)/include OPENSSLLIBDIR = $(OPENSSLDIR) #### Either the directory where the Tcl library is installed or the Tcl # source code directory resides (depending on the value of the macro # FOSSIL_TCL_SOURCE). If this points to the Tcl install directory, # this directory must have "include" and "lib" sub-directories. If |
︙ | ︙ | |||
201 202 203 204 205 206 207 | endif TCLTARGET = libtclstub86.a else LIBTCL = -ltcl86 TCLTARGET = binaries endif | > > > > > > > > | | | 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 | endif TCLTARGET = libtclstub86.a else LIBTCL = -ltcl86 TCLTARGET = binaries endif #### C compiler for use in building executables that will run on the # target platform. This is usually the same as BCCEXE, unless you # are cross-compiling. This C compiler builds the finished binary # for fossil. See BCC and BCCEXE above for the C compiler for # building intermediate code-generator tools. # TCCEXE = gcc #### C compiler and options for use in building executables that will # run on the target platform. This is usually almost the same # as BCC, unless you are cross-compiling. This C compiler builds # the finished binary for fossil. The BCC compiler above is used # for building intermediate code-generator tools. # TCC = $(PREFIX)$(TCCEXE) -Wall #### Add the necessary command line options to build with debugging # symbols, if enabled. # ifdef FOSSIL_ENABLE_SYMBOLS TCC += -g else
︙ | ︙ | |||
346 347 348 349 350 351 352 | ifdef USE_SYSTEM_SQLITE LIB += -lsqlite3 endif #### OpenSSL: Add the necessary libraries required, if enabled. # ifdef FOSSIL_ENABLE_SSL | | | 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 | ifdef USE_SYSTEM_SQLITE LIB += -lsqlite3 endif #### OpenSSL: Add the necessary libraries required, if enabled. # ifdef FOSSIL_ENABLE_SSL LIB += -lssl -lcrypto -lgdi32 -lcrypt32 endif #### Tcl: Add the necessary libraries required, if enabled. # ifdef FOSSIL_ENABLE_TCL LIB += $(LIBTCL) endif |
︙ | ︙ | |||
429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 | $(SRCDIR)/content.c \ $(SRCDIR)/db.c \ $(SRCDIR)/delta.c \ $(SRCDIR)/deltacmd.c \ $(SRCDIR)/descendants.c \ $(SRCDIR)/diff.c \ $(SRCDIR)/diffcmd.c \ $(SRCDIR)/doc.c \ $(SRCDIR)/encode.c \ $(SRCDIR)/event.c \ $(SRCDIR)/export.c \ $(SRCDIR)/file.c \ $(SRCDIR)/finfo.c \ $(SRCDIR)/foci.c \ $(SRCDIR)/fusefs.c \ $(SRCDIR)/glob.c \ $(SRCDIR)/graph.c \ $(SRCDIR)/gzip.c \ $(SRCDIR)/http.c \ $(SRCDIR)/http_socket.c \ $(SRCDIR)/http_ssl.c \ | > > | 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 | $(SRCDIR)/content.c \ $(SRCDIR)/db.c \ $(SRCDIR)/delta.c \ $(SRCDIR)/deltacmd.c \ $(SRCDIR)/descendants.c \ $(SRCDIR)/diff.c \ $(SRCDIR)/diffcmd.c \ $(SRCDIR)/dispatch.c \ $(SRCDIR)/doc.c \ $(SRCDIR)/encode.c \ $(SRCDIR)/event.c \ $(SRCDIR)/export.c \ $(SRCDIR)/file.c \ $(SRCDIR)/finfo.c \ $(SRCDIR)/foci.c \ $(SRCDIR)/fshell.c \ $(SRCDIR)/fusefs.c \ $(SRCDIR)/glob.c \ $(SRCDIR)/graph.c \ $(SRCDIR)/gzip.c \ $(SRCDIR)/http.c \ $(SRCDIR)/http_socket.c \ $(SRCDIR)/http_ssl.c \ |
︙ | ︙ | |||
507 508 509 510 511 512 513 514 515 516 517 518 519 520 | $(SRCDIR)/tar.c \ $(SRCDIR)/th_main.c \ $(SRCDIR)/timeline.c \ $(SRCDIR)/tkt.c \ $(SRCDIR)/tktsetup.c \ $(SRCDIR)/undo.c \ $(SRCDIR)/unicode.c \ $(SRCDIR)/update.c \ $(SRCDIR)/url.c \ $(SRCDIR)/user.c \ $(SRCDIR)/utf8.c \ $(SRCDIR)/util.c \ $(SRCDIR)/verify.c \ $(SRCDIR)/vfile.c \ | > | 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 | $(SRCDIR)/tar.c \ $(SRCDIR)/th_main.c \ $(SRCDIR)/timeline.c \ $(SRCDIR)/tkt.c \ $(SRCDIR)/tktsetup.c \ $(SRCDIR)/undo.c \ $(SRCDIR)/unicode.c \ $(SRCDIR)/unversioned.c \ $(SRCDIR)/update.c \ $(SRCDIR)/url.c \ $(SRCDIR)/user.c \ $(SRCDIR)/utf8.c \ $(SRCDIR)/util.c \ $(SRCDIR)/verify.c \ $(SRCDIR)/vfile.c \ |
︙ | ︙ | |||
601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 | $(OBJDIR)/content_.c \ $(OBJDIR)/db_.c \ $(OBJDIR)/delta_.c \ $(OBJDIR)/deltacmd_.c \ $(OBJDIR)/descendants_.c \ $(OBJDIR)/diff_.c \ $(OBJDIR)/diffcmd_.c \ $(OBJDIR)/doc_.c \ $(OBJDIR)/encode_.c \ $(OBJDIR)/event_.c \ $(OBJDIR)/export_.c \ $(OBJDIR)/file_.c \ $(OBJDIR)/finfo_.c \ $(OBJDIR)/foci_.c \ $(OBJDIR)/fusefs_.c \ $(OBJDIR)/glob_.c \ $(OBJDIR)/graph_.c \ $(OBJDIR)/gzip_.c \ $(OBJDIR)/http_.c \ $(OBJDIR)/http_socket_.c \ $(OBJDIR)/http_ssl_.c \ | > > | 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 | $(OBJDIR)/content_.c \ $(OBJDIR)/db_.c \ $(OBJDIR)/delta_.c \ $(OBJDIR)/deltacmd_.c \ $(OBJDIR)/descendants_.c \ $(OBJDIR)/diff_.c \ $(OBJDIR)/diffcmd_.c \ $(OBJDIR)/dispatch_.c \ $(OBJDIR)/doc_.c \ $(OBJDIR)/encode_.c \ $(OBJDIR)/event_.c \ $(OBJDIR)/export_.c \ $(OBJDIR)/file_.c \ $(OBJDIR)/finfo_.c \ $(OBJDIR)/foci_.c \ $(OBJDIR)/fshell_.c \ $(OBJDIR)/fusefs_.c \ $(OBJDIR)/glob_.c \ $(OBJDIR)/graph_.c \ $(OBJDIR)/gzip_.c \ $(OBJDIR)/http_.c \ $(OBJDIR)/http_socket_.c \ $(OBJDIR)/http_ssl_.c \ |
︙ | ︙ | |||
679 680 681 682 683 684 685 686 687 688 689 690 691 692 | $(OBJDIR)/tar_.c \ $(OBJDIR)/th_main_.c \ $(OBJDIR)/timeline_.c \ $(OBJDIR)/tkt_.c \ $(OBJDIR)/tktsetup_.c \ $(OBJDIR)/undo_.c \ $(OBJDIR)/unicode_.c \ $(OBJDIR)/update_.c \ $(OBJDIR)/url_.c \ $(OBJDIR)/user_.c \ $(OBJDIR)/utf8_.c \ $(OBJDIR)/util_.c \ $(OBJDIR)/verify_.c \ $(OBJDIR)/vfile_.c \ | > | 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 | $(OBJDIR)/tar_.c \ $(OBJDIR)/th_main_.c \ $(OBJDIR)/timeline_.c \ $(OBJDIR)/tkt_.c \ $(OBJDIR)/tktsetup_.c \ $(OBJDIR)/undo_.c \ $(OBJDIR)/unicode_.c \ $(OBJDIR)/unversioned_.c \ $(OBJDIR)/update_.c \ $(OBJDIR)/url_.c \ $(OBJDIR)/user_.c \ $(OBJDIR)/utf8_.c \ $(OBJDIR)/util_.c \ $(OBJDIR)/verify_.c \ $(OBJDIR)/vfile_.c \ |
︙ | ︙ | |||
722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 | $(OBJDIR)/content.o \ $(OBJDIR)/db.o \ $(OBJDIR)/delta.o \ $(OBJDIR)/deltacmd.o \ $(OBJDIR)/descendants.o \ $(OBJDIR)/diff.o \ $(OBJDIR)/diffcmd.o \ $(OBJDIR)/doc.o \ $(OBJDIR)/encode.o \ $(OBJDIR)/event.o \ $(OBJDIR)/export.o \ $(OBJDIR)/file.o \ $(OBJDIR)/finfo.o \ $(OBJDIR)/foci.o \ $(OBJDIR)/fusefs.o \ $(OBJDIR)/glob.o \ $(OBJDIR)/graph.o \ $(OBJDIR)/gzip.o \ $(OBJDIR)/http.o \ $(OBJDIR)/http_socket.o \ $(OBJDIR)/http_ssl.o \ | > > | 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 | $(OBJDIR)/content.o \ $(OBJDIR)/db.o \ $(OBJDIR)/delta.o \ $(OBJDIR)/deltacmd.o \ $(OBJDIR)/descendants.o \ $(OBJDIR)/diff.o \ $(OBJDIR)/diffcmd.o \ $(OBJDIR)/dispatch.o \ $(OBJDIR)/doc.o \ $(OBJDIR)/encode.o \ $(OBJDIR)/event.o \ $(OBJDIR)/export.o \ $(OBJDIR)/file.o \ $(OBJDIR)/finfo.o \ $(OBJDIR)/foci.o \ $(OBJDIR)/fshell.o \ $(OBJDIR)/fusefs.o \ $(OBJDIR)/glob.o \ $(OBJDIR)/graph.o \ $(OBJDIR)/gzip.o \ $(OBJDIR)/http.o \ $(OBJDIR)/http_socket.o \ $(OBJDIR)/http_ssl.o \ |
︙ | ︙ | |||
800 801 802 803 804 805 806 807 808 809 810 811 812 813 | $(OBJDIR)/tar.o \ $(OBJDIR)/th_main.o \ $(OBJDIR)/timeline.o \ $(OBJDIR)/tkt.o \ $(OBJDIR)/tktsetup.o \ $(OBJDIR)/undo.o \ $(OBJDIR)/unicode.o \ $(OBJDIR)/update.o \ $(OBJDIR)/url.o \ $(OBJDIR)/user.o \ $(OBJDIR)/utf8.o \ $(OBJDIR)/util.o \ $(OBJDIR)/verify.o \ $(OBJDIR)/vfile.o \ | > | 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 | $(OBJDIR)/tar.o \ $(OBJDIR)/th_main.o \ $(OBJDIR)/timeline.o \ $(OBJDIR)/tkt.o \ $(OBJDIR)/tktsetup.o \ $(OBJDIR)/undo.o \ $(OBJDIR)/unicode.o \ $(OBJDIR)/unversioned.o \ $(OBJDIR)/update.o \ $(OBJDIR)/url.o \ $(OBJDIR)/user.o \ $(OBJDIR)/utf8.o \ $(OBJDIR)/util.o \ $(OBJDIR)/verify.o \ $(OBJDIR)/vfile.o \ |
︙ | ︙ | |||
919 920 921 922 923 924 925 | $(OBJDIR)/VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(MKVERSION) $(MKVERSION) $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(SRCDIR)/../VERSION >$@ # The USE_SYSTEM_SQLITE variable may be undefined, set to 0, or set # to 1. If it is set to 1, then there is no need to build or link # the sqlite3.o object. Instead, the system SQLite will be linked # using -lsqlite3. | | | < > > > > > | | < < < < < < > > > > > | | > > | | | | | | | 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 | $(OBJDIR)/VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(MKVERSION) $(MKVERSION) $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(SRCDIR)/../VERSION >$@ # The USE_SYSTEM_SQLITE variable may be undefined, set to 0, or set # to 1. If it is set to 1, then there is no need to build or link # the sqlite3.o object. Instead, the system SQLite will be linked # using -lsqlite3. SQLITE3_OBJ.0 = $(OBJDIR)/sqlite3.o SQLITE3_OBJ.1 = SQLITE3_OBJ. = $(SQLITE3_OBJ.0) # The FOSSIL_ENABLE_MINIZ variable may be undefined, set to 0, or # set to 1. If it is set to 1, the miniz library included in the # source tree should be used; otherwise, it should not. MINIZ_OBJ.0 = MINIZ_OBJ.1 = $(OBJDIR)/miniz.o MINIZ_OBJ. = $(MINIZ_OBJ.0) # The USE_SEE variable may be undefined, 0 or 1. If undefined or # 0, ordinary SQLite is used. If 1, then sqlite3-see.c (not part of # the source tree) is used and extra flags are provided to enable # the SQLite Encryption Extension. SQLITE3_SRC.0 = sqlite3.c SQLITE3_SRC.1 = sqlite3-see.c SQLITE3_SRC. = sqlite3.c SQLITE3_SRC = $(SRCDIR)/$(SQLITE3_SRC.$(USE_SEE)) SQLITE3_SHELL_SRC.0 = shell.c SQLITE3_SHELL_SRC.1 = shell-see.c SQLITE3_SHELL_SRC. = shell.c SQLITE3_SHELL_SRC = $(SRCDIR)/$(SQLITE3_SHELL_SRC.$(USE_SEE)) SEE_FLAGS.0 = SEE_FLAGS.1 = -DSQLITE_HAS_CODEC SEE_FLAGS. 
= SEE_FLAGS = $(SEE_FLAGS.$(USE_SEE)) EXTRAOBJ = \ $(SQLITE3_OBJ.$(USE_SYSTEM_SQLITE)) \ $(MINIZ_OBJ.$(FOSSIL_ENABLE_MINIZ)) \ $(OBJDIR)/shell.o \ $(OBJDIR)/th.o \ $(OBJDIR)/th_lang.o \ $(OBJDIR)/th_tcl.o \ $(OBJDIR)/cson_amalgamation.o $(ZLIBDIR)/inffas86.o: $(TCC) -c -o $@ -DASMINF -I$(ZLIBDIR) -O3 $(ZLIBDIR)/contrib/inflate86/inffas86.c $(ZLIBDIR)/match.o: $(TCC) -c -o $@ -DASMV $(ZLIBDIR)/contrib/asm686/match.S zlib: $(ZLIBTARGETS) $(MAKE) -C $(ZLIBDIR) PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) $(ZLIBCONFIG) -f win32/Makefile.gcc libz.a clean-zlib: $(MAKE) -C $(ZLIBDIR) PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) -f win32/Makefile.gcc clean ifdef FOSSIL_ENABLE_MINIZ BLDTARGETS = else BLDTARGETS = zlib endif openssl: $(BLDTARGETS) cd $(OPENSSLLIBDIR);./Configure --cross-compile-prefix=$(PREFIX) $(SSLCONFIG) $(MAKE) -C $(OPENSSLLIBDIR) PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) build_libs clean-openssl: $(MAKE) -C $(OPENSSLLIBDIR) PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) clean tcl: cd $(TCLSRCDIR)/win;./configure $(MAKE) -C $(TCLSRCDIR)/win PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) $(TCLTARGET) clean-tcl: $(MAKE) -C $(TCLSRCDIR)/win PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) distclean APPTARGETS += $(BLDTARGETS) ifdef FOSSIL_BUILD_SSL APPTARGETS += openssl endif $(APPNAME): $(APPTARGETS) $(OBJDIR)/headers $(CODECHECK1) $(OBJ) $(EXTRAOBJ) $(OBJDIR)/fossil.o $(CODECHECK1) $(TRANS_SRC) |
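The SQLITE3_OBJ, MINIZ_OBJ, SQLITE3_SRC, SQLITE3_SHELL_SRC and SEE_FLAGS settings above all use the same trick: the controlling variable's value (empty, 0 or 1) is appended to a base name, and the resulting variable supplies the answer, so an unset flag and an explicit 0 both select the default behaviour. A rough Tcl illustration of that three-way lookup, with names that mirror the makefile but are otherwise hypothetical:

    # Map each possible USE_SEE value ("", 0 or 1) to a source file,
    # mirroring the SQLITE3_SRC.<value> variables in the makefile.
    set src()  sqlite3.c        ;# USE_SEE undefined
    set src(0) sqlite3.c        ;# USE_SEE=0
    set src(1) sqlite3-see.c    ;# USE_SEE=1

    set USE_SEE ""              ;# pretend the flag was left unset
    puts "compiling $src($USE_SEE)"   ;# prints: compiling sqlite3.c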
︙ | ︙ | |||
1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 | $(OBJDIR)/content_.c:$(OBJDIR)/content.h \ $(OBJDIR)/db_.c:$(OBJDIR)/db.h \ $(OBJDIR)/delta_.c:$(OBJDIR)/delta.h \ $(OBJDIR)/deltacmd_.c:$(OBJDIR)/deltacmd.h \ $(OBJDIR)/descendants_.c:$(OBJDIR)/descendants.h \ $(OBJDIR)/diff_.c:$(OBJDIR)/diff.h \ $(OBJDIR)/diffcmd_.c:$(OBJDIR)/diffcmd.h \ $(OBJDIR)/doc_.c:$(OBJDIR)/doc.h \ $(OBJDIR)/encode_.c:$(OBJDIR)/encode.h \ $(OBJDIR)/event_.c:$(OBJDIR)/event.h \ $(OBJDIR)/export_.c:$(OBJDIR)/export.h \ $(OBJDIR)/file_.c:$(OBJDIR)/file.h \ $(OBJDIR)/finfo_.c:$(OBJDIR)/finfo.h \ $(OBJDIR)/foci_.c:$(OBJDIR)/foci.h \ $(OBJDIR)/fusefs_.c:$(OBJDIR)/fusefs.h \ $(OBJDIR)/glob_.c:$(OBJDIR)/glob.h \ $(OBJDIR)/graph_.c:$(OBJDIR)/graph.h \ $(OBJDIR)/gzip_.c:$(OBJDIR)/gzip.h \ $(OBJDIR)/http_.c:$(OBJDIR)/http.h \ $(OBJDIR)/http_socket_.c:$(OBJDIR)/http_socket.h \ $(OBJDIR)/http_ssl_.c:$(OBJDIR)/http_ssl.h \ | > > | 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 | $(OBJDIR)/content_.c:$(OBJDIR)/content.h \ $(OBJDIR)/db_.c:$(OBJDIR)/db.h \ $(OBJDIR)/delta_.c:$(OBJDIR)/delta.h \ $(OBJDIR)/deltacmd_.c:$(OBJDIR)/deltacmd.h \ $(OBJDIR)/descendants_.c:$(OBJDIR)/descendants.h \ $(OBJDIR)/diff_.c:$(OBJDIR)/diff.h \ $(OBJDIR)/diffcmd_.c:$(OBJDIR)/diffcmd.h \ $(OBJDIR)/dispatch_.c:$(OBJDIR)/dispatch.h \ $(OBJDIR)/doc_.c:$(OBJDIR)/doc.h \ $(OBJDIR)/encode_.c:$(OBJDIR)/encode.h \ $(OBJDIR)/event_.c:$(OBJDIR)/event.h \ $(OBJDIR)/export_.c:$(OBJDIR)/export.h \ $(OBJDIR)/file_.c:$(OBJDIR)/file.h \ $(OBJDIR)/finfo_.c:$(OBJDIR)/finfo.h \ $(OBJDIR)/foci_.c:$(OBJDIR)/foci.h \ $(OBJDIR)/fshell_.c:$(OBJDIR)/fshell.h \ $(OBJDIR)/fusefs_.c:$(OBJDIR)/fusefs.h \ $(OBJDIR)/glob_.c:$(OBJDIR)/glob.h \ $(OBJDIR)/graph_.c:$(OBJDIR)/graph.h \ $(OBJDIR)/gzip_.c:$(OBJDIR)/gzip.h \ $(OBJDIR)/http_.c:$(OBJDIR)/http.h \ $(OBJDIR)/http_socket_.c:$(OBJDIR)/http_socket.h \ $(OBJDIR)/http_ssl_.c:$(OBJDIR)/http_ssl.h \ |
︙ | ︙ | |||
1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 | $(OBJDIR)/tar_.c:$(OBJDIR)/tar.h \ $(OBJDIR)/th_main_.c:$(OBJDIR)/th_main.h \ $(OBJDIR)/timeline_.c:$(OBJDIR)/timeline.h \ $(OBJDIR)/tkt_.c:$(OBJDIR)/tkt.h \ $(OBJDIR)/tktsetup_.c:$(OBJDIR)/tktsetup.h \ $(OBJDIR)/undo_.c:$(OBJDIR)/undo.h \ $(OBJDIR)/unicode_.c:$(OBJDIR)/unicode.h \ $(OBJDIR)/update_.c:$(OBJDIR)/update.h \ $(OBJDIR)/url_.c:$(OBJDIR)/url.h \ $(OBJDIR)/user_.c:$(OBJDIR)/user.h \ $(OBJDIR)/utf8_.c:$(OBJDIR)/utf8.h \ $(OBJDIR)/util_.c:$(OBJDIR)/util.h \ $(OBJDIR)/verify_.c:$(OBJDIR)/verify.h \ $(OBJDIR)/vfile_.c:$(OBJDIR)/vfile.h \ | > | 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 | $(OBJDIR)/tar_.c:$(OBJDIR)/tar.h \ $(OBJDIR)/th_main_.c:$(OBJDIR)/th_main.h \ $(OBJDIR)/timeline_.c:$(OBJDIR)/timeline.h \ $(OBJDIR)/tkt_.c:$(OBJDIR)/tkt.h \ $(OBJDIR)/tktsetup_.c:$(OBJDIR)/tktsetup.h \ $(OBJDIR)/undo_.c:$(OBJDIR)/undo.h \ $(OBJDIR)/unicode_.c:$(OBJDIR)/unicode.h \ $(OBJDIR)/unversioned_.c:$(OBJDIR)/unversioned.h \ $(OBJDIR)/update_.c:$(OBJDIR)/update.h \ $(OBJDIR)/url_.c:$(OBJDIR)/url.h \ $(OBJDIR)/user_.c:$(OBJDIR)/user.h \ $(OBJDIR)/utf8_.c:$(OBJDIR)/utf8.h \ $(OBJDIR)/util_.c:$(OBJDIR)/util.h \ $(OBJDIR)/verify_.c:$(OBJDIR)/verify.h \ $(OBJDIR)/vfile_.c:$(OBJDIR)/vfile.h \ |
︙ | ︙ | |||
1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 | $(OBJDIR)/diffcmd_.c: $(SRCDIR)/diffcmd.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/diffcmd.c >$@ $(OBJDIR)/diffcmd.o: $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/diffcmd.o -c $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h: $(OBJDIR)/headers $(OBJDIR)/doc_.c: $(SRCDIR)/doc.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/doc.c >$@ $(OBJDIR)/doc.o: $(OBJDIR)/doc_.c $(OBJDIR)/doc.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/doc.o -c $(OBJDIR)/doc_.c | > > > > > > > > | 1391 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 | $(OBJDIR)/diffcmd_.c: $(SRCDIR)/diffcmd.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/diffcmd.c >$@ $(OBJDIR)/diffcmd.o: $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/diffcmd.o -c $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h: $(OBJDIR)/headers $(OBJDIR)/dispatch_.c: $(SRCDIR)/dispatch.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/dispatch.c >$@ $(OBJDIR)/dispatch.o: $(OBJDIR)/dispatch_.c $(OBJDIR)/dispatch.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/dispatch.o -c $(OBJDIR)/dispatch_.c $(OBJDIR)/dispatch.h: $(OBJDIR)/headers $(OBJDIR)/doc_.c: $(SRCDIR)/doc.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/doc.c >$@ $(OBJDIR)/doc.o: $(OBJDIR)/doc_.c $(OBJDIR)/doc.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/doc.o -c $(OBJDIR)/doc_.c |
︙ | ︙ | |||
1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 | $(OBJDIR)/foci_.c: $(SRCDIR)/foci.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/foci.c >$@ $(OBJDIR)/foci.o: $(OBJDIR)/foci_.c $(OBJDIR)/foci.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/foci.o -c $(OBJDIR)/foci_.c $(OBJDIR)/foci.h: $(OBJDIR)/headers $(OBJDIR)/fusefs_.c: $(SRCDIR)/fusefs.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/fusefs.c >$@ $(OBJDIR)/fusefs.o: $(OBJDIR)/fusefs_.c $(OBJDIR)/fusefs.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/fusefs.o -c $(OBJDIR)/fusefs_.c | > > > > > > > > | 1455 1456 1457 1458 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 | $(OBJDIR)/foci_.c: $(SRCDIR)/foci.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/foci.c >$@ $(OBJDIR)/foci.o: $(OBJDIR)/foci_.c $(OBJDIR)/foci.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/foci.o -c $(OBJDIR)/foci_.c $(OBJDIR)/foci.h: $(OBJDIR)/headers $(OBJDIR)/fshell_.c: $(SRCDIR)/fshell.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/fshell.c >$@ $(OBJDIR)/fshell.o: $(OBJDIR)/fshell_.c $(OBJDIR)/fshell.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/fshell.o -c $(OBJDIR)/fshell_.c $(OBJDIR)/fshell.h: $(OBJDIR)/headers $(OBJDIR)/fusefs_.c: $(SRCDIR)/fusefs.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/fusefs.c >$@ $(OBJDIR)/fusefs.o: $(OBJDIR)/fusefs_.c $(OBJDIR)/fusefs.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/fusefs.o -c $(OBJDIR)/fusefs_.c |
︙ | ︙ | |||
1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 | $(OBJDIR)/unicode_.c: $(SRCDIR)/unicode.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/unicode.c >$@ $(OBJDIR)/unicode.o: $(OBJDIR)/unicode_.c $(OBJDIR)/unicode.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/unicode.o -c $(OBJDIR)/unicode_.c $(OBJDIR)/unicode.h: $(OBJDIR)/headers $(OBJDIR)/update_.c: $(SRCDIR)/update.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/update.c >$@ $(OBJDIR)/update.o: $(OBJDIR)/update_.c $(OBJDIR)/update.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/update.o -c $(OBJDIR)/update_.c | > > > > > > > > | 2031 2032 2033 2034 2035 2036 2037 2038 2039 2040 2041 2042 2043 2044 2045 2046 2047 2048 2049 2050 2051 2052 | $(OBJDIR)/unicode_.c: $(SRCDIR)/unicode.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/unicode.c >$@ $(OBJDIR)/unicode.o: $(OBJDIR)/unicode_.c $(OBJDIR)/unicode.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/unicode.o -c $(OBJDIR)/unicode_.c $(OBJDIR)/unicode.h: $(OBJDIR)/headers $(OBJDIR)/unversioned_.c: $(SRCDIR)/unversioned.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/unversioned.c >$@ $(OBJDIR)/unversioned.o: $(OBJDIR)/unversioned_.c $(OBJDIR)/unversioned.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/unversioned.o -c $(OBJDIR)/unversioned_.c $(OBJDIR)/unversioned.h: $(OBJDIR)/headers $(OBJDIR)/update_.c: $(SRCDIR)/update.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/update.c >$@ $(OBJDIR)/update.o: $(OBJDIR)/update_.c $(OBJDIR)/update.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/update.o -c $(OBJDIR)/update_.c |
︙ | ︙ | |||
2103 2104 2105 2106 2107 2108 2109 2110 | $(TRANSLATE) $(SRCDIR)/zip.c >$@ $(OBJDIR)/zip.o: $(OBJDIR)/zip_.c $(OBJDIR)/zip.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/zip.o -c $(OBJDIR)/zip_.c $(OBJDIR)/zip.h: $(OBJDIR)/headers SQLITE_OPTIONS = -DNDEBUG=1 \ | > > > > > > > > > > | > > < < | | 2160 2161 2162 2163 2164 2165 2166 2167 2168 2169 2170 2171 2172 2173 2174 2175 2176 2177 2178 2179 2180 2181 2182 2183 2184 2185 2186 2187 2188 2189 2190 2191 2192 2193 2194 2195 2196 2197 | $(TRANSLATE) $(SRCDIR)/zip.c >$@ $(OBJDIR)/zip.o: $(OBJDIR)/zip_.c $(OBJDIR)/zip.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/zip.o -c $(OBJDIR)/zip_.c $(OBJDIR)/zip.h: $(OBJDIR)/headers MINGW_OPTIONS = -D_HAVE__MINGW_H SQLITE_OPTIONS = -DNDEBUG=1 \ -DSQLITE_THREADSAFE=0 \ -DSQLITE_DEFAULT_MEMSTATUS=0 \ -DSQLITE_DEFAULT_WAL_SYNCHRONOUS=1 \ -DSQLITE_LIKE_DOESNT_MATCH_BLOBS \ -DSQLITE_OMIT_DECLTYPE \ -DSQLITE_OMIT_DEPRECATED \ -DSQLITE_OMIT_PROGRESS_CALLBACK \ -DSQLITE_OMIT_SHARED_CACHE \ -DSQLITE_OMIT_LOAD_EXTENSION \ -DSQLITE_MAX_EXPR_DEPTH=0 \ -DSQLITE_USE_ALLOCA \ -DSQLITE_ENABLE_LOCKING_STYLE=0 \ -DSQLITE_DEFAULT_FILE_FORMAT=4 \ -DSQLITE_ENABLE_EXPLAIN_COMMENTS \ -DSQLITE_ENABLE_FTS4 \ -DSQLITE_ENABLE_FTS3_PARENTHESIS \ -DSQLITE_ENABLE_DBSTAT_VTAB \ -DSQLITE_ENABLE_JSON1 \ -DSQLITE_ENABLE_FTS5 \ -DSQLITE_WIN32_NO_ANSI \ $(MINGW_OPTIONS) \ -DSQLITE_USE_MALLOC_H \ -DSQLITE_USE_MSIZE SHELL_OPTIONS = -Dmain=sqlite3_shell \ -DSQLITE_SHELL_IS_UTF8=1 \ -DSQLITE_OMIT_LOAD_EXTENSION=1 \ -DUSE_SYSTEM_SQLITE=$(USE_SYSTEM_SQLITE) \ |
︙ | ︙ | |||
2143 2144 2145 2146 2147 2148 2149 | -c $(SQLITE3_SRC) -o $@ $(OBJDIR)/cson_amalgamation.o: $(SRCDIR)/cson_amalgamation.c $(XTCC) -c $(SRCDIR)/cson_amalgamation.c -o $@ $(OBJDIR)/json.o $(OBJDIR)/json_artifact.o $(OBJDIR)/json_branch.o $(OBJDIR)/json_config.o $(OBJDIR)/json_diff.o $(OBJDIR)/json_dir.o $(OBJDIR)/jsos_finfo.o $(OBJDIR)/json_login.o $(OBJDIR)/json_query.o $(OBJDIR)/json_report.o $(OBJDIR)/json_status.o $(OBJDIR)/json_tag.o $(OBJDIR)/json_timeline.o $(OBJDIR)/json_user.o $(OBJDIR)/json_wiki.o : $(SRCDIR)/json_detail.h | | | | 2210 2211 2212 2213 2214 2215 2216 2217 2218 2219 2220 2221 2222 2223 2224 2225 2226 2227 2228 2229 2230 2231 | -c $(SQLITE3_SRC) -o $@ $(OBJDIR)/cson_amalgamation.o: $(SRCDIR)/cson_amalgamation.c $(XTCC) -c $(SRCDIR)/cson_amalgamation.c -o $@ $(OBJDIR)/json.o $(OBJDIR)/json_artifact.o $(OBJDIR)/json_branch.o $(OBJDIR)/json_config.o $(OBJDIR)/json_diff.o $(OBJDIR)/json_dir.o $(OBJDIR)/jsos_finfo.o $(OBJDIR)/json_login.o $(OBJDIR)/json_query.o $(OBJDIR)/json_report.o $(OBJDIR)/json_status.o $(OBJDIR)/json_tag.o $(OBJDIR)/json_timeline.o $(OBJDIR)/json_user.o $(OBJDIR)/json_wiki.o : $(SRCDIR)/json_detail.h $(OBJDIR)/shell.o: $(SQLITE3_SHELL_SRC) $(SRCDIR)/sqlite3.h $(SRCDIR)/../win/Makefile.mingw $(XTCC) $(SHELL_OPTIONS) $(SHELL_CFLAGS) -c $(SQLITE3_SHELL_SRC) -o $@ $(OBJDIR)/th.o: $(SRCDIR)/th.c $(XTCC) -c $(SRCDIR)/th.c -o $@ $(OBJDIR)/th_lang.o: $(SRCDIR)/th_lang.c $(XTCC) -c $(SRCDIR)/th_lang.c -o $@ $(OBJDIR)/th_tcl.o: $(SRCDIR)/th_tcl.c $(XTCC) -c $(SRCDIR)/th_tcl.c -o $@ $(OBJDIR)/miniz.o: $(SRCDIR)/miniz.c $(XTCC) $(MINIZ_OPTIONS) -c $(SRCDIR)/miniz.c -o $@ |
Changes to win/Makefile.mingw.mistachkin.
︙ | ︙ | |||
30 31 32 33 34 35 36 37 38 39 40 41 42 | # the following to point from the build directory to the src/ folder. # SRCDIR = src #### The directory into which object code files should be written. # OBJDIR = wbld #### C Compiler and options for use in building executables that # will run on the platform that is doing the build. This is used # to compile code-generator programs as part of the build process. # See TCC below for the C compiler for building the finished binary. # | > > > > > > > > | | 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 | # the following to point from the build directory to the src/ folder. # SRCDIR = src #### The directory into which object code files should be written. # OBJDIR = wbld #### C compiler for use in building executables that will run on # the platform that is doing the build. This is used to compile # code-generator programs as part of the build process. See TCC # and TCCEXE below for the C compiler for building the finished # binary. # BCCEXE = gcc #### C Compiler and options for use in building executables that # will run on the platform that is doing the build. This is used # to compile code-generator programs as part of the build process. # See TCC below for the C compiler for building the finished binary. # BCC = $(BCCEXE) #### Enable compiling with debug symbols (much larger binary) # # FOSSIL_ENABLE_SYMBOLS = 1 #### Enable JSON (http://www.json.org) support using "cson" # |
︙ | ︙ | |||
132 133 134 135 136 137 138 | # used, taking into account whether zlib is actually enabled and the target # processor architecture. # ifndef X64 SSLCONFIG = mingw ifndef FOSSIL_ENABLE_MINIZ ZLIBCONFIG = LOC="-DASMV -DASMINF" OBJA="inffas86.o match.o" | | | | | | 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 | # used, taking into account whether zlib is actually enabled and the target # processor architecture. # ifndef X64 SSLCONFIG = mingw ifndef FOSSIL_ENABLE_MINIZ ZLIBCONFIG = LOC="-DASMV -DASMINF" OBJA="inffas86.o match.o" ZLIBTARGETS = $(ZLIBDIR)/inffas86.o $(ZLIBDIR)/match.o else ZLIBCONFIG = ZLIBTARGETS = endif else SSLCONFIG = mingw64 ZLIBCONFIG = ZLIBTARGETS = endif #### Disable creation of the OpenSSL shared libraries. Also, disable support # for both SSLv2 and SSLv3 (i.e. thereby forcing the use of TLS). # SSLCONFIG += no-ssl2 no-ssl3 no-shared #### When using zlib, make sure that OpenSSL is configured to use the zlib # that Fossil knows about (i.e. the one within the source tree). # ifndef FOSSIL_ENABLE_MINIZ SSLCONFIG += --with-zlib-lib=$(PWD)/$(ZLIBDIR) --with-zlib-include=$(PWD)/$(ZLIBDIR) zlib endif #### The directories where the OpenSSL include and library files are located. # The recommended usage here is to use the Sysinternals junction tool # to create a hard link between an "openssl-1.x" sub-directory of the # Fossil source code directory and the target OpenSSL source directory. # OPENSSLDIR = $(SRCDIR)/../compat/openssl-1.0.2j OPENSSLINCDIR = $(OPENSSLDIR)/include OPENSSLLIBDIR = $(OPENSSLDIR) #### Either the directory where the Tcl library is installed or the Tcl # source code directory resides (depending on the value of the macro # FOSSIL_TCL_SOURCE). If this points to the Tcl install directory, # this directory must have "include" and "lib" sub-directories. If |
︙ | ︙ | |||
201 202 203 204 205 206 207 | endif TCLTARGET = libtclstub86.a else LIBTCL = -ltcl86 TCLTARGET = binaries endif | > > > > > > > > | | | 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 | endif TCLTARGET = libtclstub86.a else LIBTCL = -ltcl86 TCLTARGET = binaries endif #### C compiler for use in building executables that will run on the # target platform. This is usually the same as BCCEXE, unless you # are cross-compiling. This C compiler builds the finished binary # for fossil. See BCC and BCCEXE above for the C compiler for # building intermediate code-generator tools. # TCCEXE = gcc #### C compiler and options for use in building executables that will # run on the target platform. This is usually almost the same # as BCC, unless you are cross-compiling. This C compiler builds # the finished binary for fossil. The BCC compiler above is used # for building intermediate code-generator tools. # TCC = $(PREFIX)$(TCCEXE) -Wall #### Add the necessary command line options to build with debugging # symbols, if enabled. # ifdef FOSSIL_ENABLE_SYMBOLS TCC += -g else
︙ | ︙ | |||
346 347 348 349 350 351 352 | ifdef USE_SYSTEM_SQLITE LIB += -lsqlite3 endif #### OpenSSL: Add the necessary libraries required, if enabled. # ifdef FOSSIL_ENABLE_SSL | | | 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 | ifdef USE_SYSTEM_SQLITE LIB += -lsqlite3 endif #### OpenSSL: Add the necessary libraries required, if enabled. # ifdef FOSSIL_ENABLE_SSL LIB += -lssl -lcrypto -lgdi32 -lcrypt32 endif #### Tcl: Add the necessary libraries required, if enabled. # ifdef FOSSIL_ENABLE_TCL LIB += $(LIBTCL) endif |
︙ | ︙ | |||
429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 | $(SRCDIR)/content.c \ $(SRCDIR)/db.c \ $(SRCDIR)/delta.c \ $(SRCDIR)/deltacmd.c \ $(SRCDIR)/descendants.c \ $(SRCDIR)/diff.c \ $(SRCDIR)/diffcmd.c \ $(SRCDIR)/doc.c \ $(SRCDIR)/encode.c \ $(SRCDIR)/event.c \ $(SRCDIR)/export.c \ $(SRCDIR)/file.c \ $(SRCDIR)/finfo.c \ $(SRCDIR)/foci.c \ $(SRCDIR)/fusefs.c \ $(SRCDIR)/glob.c \ $(SRCDIR)/graph.c \ $(SRCDIR)/gzip.c \ $(SRCDIR)/http.c \ $(SRCDIR)/http_socket.c \ $(SRCDIR)/http_ssl.c \ | > > | 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 | $(SRCDIR)/content.c \ $(SRCDIR)/db.c \ $(SRCDIR)/delta.c \ $(SRCDIR)/deltacmd.c \ $(SRCDIR)/descendants.c \ $(SRCDIR)/diff.c \ $(SRCDIR)/diffcmd.c \ $(SRCDIR)/dispatch.c \ $(SRCDIR)/doc.c \ $(SRCDIR)/encode.c \ $(SRCDIR)/event.c \ $(SRCDIR)/export.c \ $(SRCDIR)/file.c \ $(SRCDIR)/finfo.c \ $(SRCDIR)/foci.c \ $(SRCDIR)/fshell.c \ $(SRCDIR)/fusefs.c \ $(SRCDIR)/glob.c \ $(SRCDIR)/graph.c \ $(SRCDIR)/gzip.c \ $(SRCDIR)/http.c \ $(SRCDIR)/http_socket.c \ $(SRCDIR)/http_ssl.c \ |
︙ | ︙ | |||
507 508 509 510 511 512 513 514 515 516 517 518 519 520 | $(SRCDIR)/tar.c \ $(SRCDIR)/th_main.c \ $(SRCDIR)/timeline.c \ $(SRCDIR)/tkt.c \ $(SRCDIR)/tktsetup.c \ $(SRCDIR)/undo.c \ $(SRCDIR)/unicode.c \ $(SRCDIR)/update.c \ $(SRCDIR)/url.c \ $(SRCDIR)/user.c \ $(SRCDIR)/utf8.c \ $(SRCDIR)/util.c \ $(SRCDIR)/verify.c \ $(SRCDIR)/vfile.c \ | > | 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 | $(SRCDIR)/tar.c \ $(SRCDIR)/th_main.c \ $(SRCDIR)/timeline.c \ $(SRCDIR)/tkt.c \ $(SRCDIR)/tktsetup.c \ $(SRCDIR)/undo.c \ $(SRCDIR)/unicode.c \ $(SRCDIR)/unversioned.c \ $(SRCDIR)/update.c \ $(SRCDIR)/url.c \ $(SRCDIR)/user.c \ $(SRCDIR)/utf8.c \ $(SRCDIR)/util.c \ $(SRCDIR)/verify.c \ $(SRCDIR)/vfile.c \ |
︙ | ︙ | |||
601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 | $(OBJDIR)/content_.c \ $(OBJDIR)/db_.c \ $(OBJDIR)/delta_.c \ $(OBJDIR)/deltacmd_.c \ $(OBJDIR)/descendants_.c \ $(OBJDIR)/diff_.c \ $(OBJDIR)/diffcmd_.c \ $(OBJDIR)/doc_.c \ $(OBJDIR)/encode_.c \ $(OBJDIR)/event_.c \ $(OBJDIR)/export_.c \ $(OBJDIR)/file_.c \ $(OBJDIR)/finfo_.c \ $(OBJDIR)/foci_.c \ $(OBJDIR)/fusefs_.c \ $(OBJDIR)/glob_.c \ $(OBJDIR)/graph_.c \ $(OBJDIR)/gzip_.c \ $(OBJDIR)/http_.c \ $(OBJDIR)/http_socket_.c \ $(OBJDIR)/http_ssl_.c \ | > > | 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 | $(OBJDIR)/content_.c \ $(OBJDIR)/db_.c \ $(OBJDIR)/delta_.c \ $(OBJDIR)/deltacmd_.c \ $(OBJDIR)/descendants_.c \ $(OBJDIR)/diff_.c \ $(OBJDIR)/diffcmd_.c \ $(OBJDIR)/dispatch_.c \ $(OBJDIR)/doc_.c \ $(OBJDIR)/encode_.c \ $(OBJDIR)/event_.c \ $(OBJDIR)/export_.c \ $(OBJDIR)/file_.c \ $(OBJDIR)/finfo_.c \ $(OBJDIR)/foci_.c \ $(OBJDIR)/fshell_.c \ $(OBJDIR)/fusefs_.c \ $(OBJDIR)/glob_.c \ $(OBJDIR)/graph_.c \ $(OBJDIR)/gzip_.c \ $(OBJDIR)/http_.c \ $(OBJDIR)/http_socket_.c \ $(OBJDIR)/http_ssl_.c \ |
︙ | ︙ | |||
679 680 681 682 683 684 685 686 687 688 689 690 691 692 | $(OBJDIR)/tar_.c \ $(OBJDIR)/th_main_.c \ $(OBJDIR)/timeline_.c \ $(OBJDIR)/tkt_.c \ $(OBJDIR)/tktsetup_.c \ $(OBJDIR)/undo_.c \ $(OBJDIR)/unicode_.c \ $(OBJDIR)/update_.c \ $(OBJDIR)/url_.c \ $(OBJDIR)/user_.c \ $(OBJDIR)/utf8_.c \ $(OBJDIR)/util_.c \ $(OBJDIR)/verify_.c \ $(OBJDIR)/vfile_.c \ | > | 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 | $(OBJDIR)/tar_.c \ $(OBJDIR)/th_main_.c \ $(OBJDIR)/timeline_.c \ $(OBJDIR)/tkt_.c \ $(OBJDIR)/tktsetup_.c \ $(OBJDIR)/undo_.c \ $(OBJDIR)/unicode_.c \ $(OBJDIR)/unversioned_.c \ $(OBJDIR)/update_.c \ $(OBJDIR)/url_.c \ $(OBJDIR)/user_.c \ $(OBJDIR)/utf8_.c \ $(OBJDIR)/util_.c \ $(OBJDIR)/verify_.c \ $(OBJDIR)/vfile_.c \ |
︙ | ︙ | |||
722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 | $(OBJDIR)/content.o \ $(OBJDIR)/db.o \ $(OBJDIR)/delta.o \ $(OBJDIR)/deltacmd.o \ $(OBJDIR)/descendants.o \ $(OBJDIR)/diff.o \ $(OBJDIR)/diffcmd.o \ $(OBJDIR)/doc.o \ $(OBJDIR)/encode.o \ $(OBJDIR)/event.o \ $(OBJDIR)/export.o \ $(OBJDIR)/file.o \ $(OBJDIR)/finfo.o \ $(OBJDIR)/foci.o \ $(OBJDIR)/fusefs.o \ $(OBJDIR)/glob.o \ $(OBJDIR)/graph.o \ $(OBJDIR)/gzip.o \ $(OBJDIR)/http.o \ $(OBJDIR)/http_socket.o \ $(OBJDIR)/http_ssl.o \ | > > | 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 | $(OBJDIR)/content.o \ $(OBJDIR)/db.o \ $(OBJDIR)/delta.o \ $(OBJDIR)/deltacmd.o \ $(OBJDIR)/descendants.o \ $(OBJDIR)/diff.o \ $(OBJDIR)/diffcmd.o \ $(OBJDIR)/dispatch.o \ $(OBJDIR)/doc.o \ $(OBJDIR)/encode.o \ $(OBJDIR)/event.o \ $(OBJDIR)/export.o \ $(OBJDIR)/file.o \ $(OBJDIR)/finfo.o \ $(OBJDIR)/foci.o \ $(OBJDIR)/fshell.o \ $(OBJDIR)/fusefs.o \ $(OBJDIR)/glob.o \ $(OBJDIR)/graph.o \ $(OBJDIR)/gzip.o \ $(OBJDIR)/http.o \ $(OBJDIR)/http_socket.o \ $(OBJDIR)/http_ssl.o \ |
︙ | ︙ | |||
800 801 802 803 804 805 806 807 808 809 810 811 812 813 | $(OBJDIR)/tar.o \ $(OBJDIR)/th_main.o \ $(OBJDIR)/timeline.o \ $(OBJDIR)/tkt.o \ $(OBJDIR)/tktsetup.o \ $(OBJDIR)/undo.o \ $(OBJDIR)/unicode.o \ $(OBJDIR)/update.o \ $(OBJDIR)/url.o \ $(OBJDIR)/user.o \ $(OBJDIR)/utf8.o \ $(OBJDIR)/util.o \ $(OBJDIR)/verify.o \ $(OBJDIR)/vfile.o \ | > | 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 | $(OBJDIR)/tar.o \ $(OBJDIR)/th_main.o \ $(OBJDIR)/timeline.o \ $(OBJDIR)/tkt.o \ $(OBJDIR)/tktsetup.o \ $(OBJDIR)/undo.o \ $(OBJDIR)/unicode.o \ $(OBJDIR)/unversioned.o \ $(OBJDIR)/update.o \ $(OBJDIR)/url.o \ $(OBJDIR)/user.o \ $(OBJDIR)/utf8.o \ $(OBJDIR)/util.o \ $(OBJDIR)/verify.o \ $(OBJDIR)/vfile.o \ |
︙ | ︙ | |||
919 920 921 922 923 924 925 | $(OBJDIR)/VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(MKVERSION) $(MKVERSION) $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(SRCDIR)/../VERSION >$@ # The USE_SYSTEM_SQLITE variable may be undefined, set to 0, or set # to 1. If it is set to 1, then there is no need to build or link # the sqlite3.o object. Instead, the system SQLite will be linked # using -lsqlite3. | | | < > > > > > | | < < < < < < > > > > > | | > > | | | | | | | 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 | $(OBJDIR)/VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(MKVERSION) $(MKVERSION) $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(SRCDIR)/../VERSION >$@ # The USE_SYSTEM_SQLITE variable may be undefined, set to 0, or set # to 1. If it is set to 1, then there is no need to build or link # the sqlite3.o object. Instead, the system SQLite will be linked # using -lsqlite3. SQLITE3_OBJ.0 = $(OBJDIR)/sqlite3.o SQLITE3_OBJ.1 = SQLITE3_OBJ. = $(SQLITE3_OBJ.0) # The FOSSIL_ENABLE_MINIZ variable may be undefined, set to 0, or # set to 1. If it is set to 1, the miniz library included in the # source tree should be used; otherwise, it should not. MINIZ_OBJ.0 = MINIZ_OBJ.1 = $(OBJDIR)/miniz.o MINIZ_OBJ. = $(MINIZ_OBJ.0) # The USE_SEE variable may be undefined, 0 or 1. If undefined or # 0, ordinary SQLite is used. If 1, then sqlite3-see.c (not part of # the source tree) is used and extra flags are provided to enable # the SQLite Encryption Extension. SQLITE3_SRC.0 = sqlite3.c SQLITE3_SRC.1 = sqlite3-see.c SQLITE3_SRC. = sqlite3.c SQLITE3_SRC = $(SRCDIR)/$(SQLITE3_SRC.$(USE_SEE)) SQLITE3_SHELL_SRC.0 = shell.c SQLITE3_SHELL_SRC.1 = shell-see.c SQLITE3_SHELL_SRC. = shell.c SQLITE3_SHELL_SRC = $(SRCDIR)/$(SQLITE3_SHELL_SRC.$(USE_SEE)) SEE_FLAGS.0 = SEE_FLAGS.1 = -DSQLITE_HAS_CODEC SEE_FLAGS. 
= SEE_FLAGS = $(SEE_FLAGS.$(USE_SEE)) EXTRAOBJ = \ $(SQLITE3_OBJ.$(USE_SYSTEM_SQLITE)) \ $(MINIZ_OBJ.$(FOSSIL_ENABLE_MINIZ)) \ $(OBJDIR)/shell.o \ $(OBJDIR)/th.o \ $(OBJDIR)/th_lang.o \ $(OBJDIR)/th_tcl.o \ $(OBJDIR)/cson_amalgamation.o $(ZLIBDIR)/inffas86.o: $(TCC) -c -o $@ -DASMINF -I$(ZLIBDIR) -O3 $(ZLIBDIR)/contrib/inflate86/inffas86.c $(ZLIBDIR)/match.o: $(TCC) -c -o $@ -DASMV $(ZLIBDIR)/contrib/asm686/match.S zlib: $(ZLIBTARGETS) $(MAKE) -C $(ZLIBDIR) PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) $(ZLIBCONFIG) -f win32/Makefile.gcc libz.a clean-zlib: $(MAKE) -C $(ZLIBDIR) PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) -f win32/Makefile.gcc clean ifdef FOSSIL_ENABLE_MINIZ BLDTARGETS = else BLDTARGETS = zlib endif openssl: $(BLDTARGETS) cd $(OPENSSLLIBDIR);./Configure --cross-compile-prefix=$(PREFIX) $(SSLCONFIG) $(MAKE) -C $(OPENSSLLIBDIR) PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) build_libs clean-openssl: $(MAKE) -C $(OPENSSLLIBDIR) PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) clean tcl: cd $(TCLSRCDIR)/win;./configure $(MAKE) -C $(TCLSRCDIR)/win PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) $(TCLTARGET) clean-tcl: $(MAKE) -C $(TCLSRCDIR)/win PREFIX=$(PREFIX) CC=$(PREFIX)$(TCCEXE) distclean APPTARGETS += $(BLDTARGETS) ifdef FOSSIL_BUILD_SSL APPTARGETS += openssl endif $(APPNAME): $(APPTARGETS) $(OBJDIR)/headers $(CODECHECK1) $(OBJ) $(EXTRAOBJ) $(OBJDIR)/fossil.o $(CODECHECK1) $(TRANS_SRC) |
︙ | ︙ | |||
1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 | $(OBJDIR)/content_.c:$(OBJDIR)/content.h \ $(OBJDIR)/db_.c:$(OBJDIR)/db.h \ $(OBJDIR)/delta_.c:$(OBJDIR)/delta.h \ $(OBJDIR)/deltacmd_.c:$(OBJDIR)/deltacmd.h \ $(OBJDIR)/descendants_.c:$(OBJDIR)/descendants.h \ $(OBJDIR)/diff_.c:$(OBJDIR)/diff.h \ $(OBJDIR)/diffcmd_.c:$(OBJDIR)/diffcmd.h \ $(OBJDIR)/doc_.c:$(OBJDIR)/doc.h \ $(OBJDIR)/encode_.c:$(OBJDIR)/encode.h \ $(OBJDIR)/event_.c:$(OBJDIR)/event.h \ $(OBJDIR)/export_.c:$(OBJDIR)/export.h \ $(OBJDIR)/file_.c:$(OBJDIR)/file.h \ $(OBJDIR)/finfo_.c:$(OBJDIR)/finfo.h \ $(OBJDIR)/foci_.c:$(OBJDIR)/foci.h \ $(OBJDIR)/fusefs_.c:$(OBJDIR)/fusefs.h \ $(OBJDIR)/glob_.c:$(OBJDIR)/glob.h \ $(OBJDIR)/graph_.c:$(OBJDIR)/graph.h \ $(OBJDIR)/gzip_.c:$(OBJDIR)/gzip.h \ $(OBJDIR)/http_.c:$(OBJDIR)/http.h \ $(OBJDIR)/http_socket_.c:$(OBJDIR)/http_socket.h \ $(OBJDIR)/http_ssl_.c:$(OBJDIR)/http_ssl.h \ | > > | 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 | $(OBJDIR)/content_.c:$(OBJDIR)/content.h \ $(OBJDIR)/db_.c:$(OBJDIR)/db.h \ $(OBJDIR)/delta_.c:$(OBJDIR)/delta.h \ $(OBJDIR)/deltacmd_.c:$(OBJDIR)/deltacmd.h \ $(OBJDIR)/descendants_.c:$(OBJDIR)/descendants.h \ $(OBJDIR)/diff_.c:$(OBJDIR)/diff.h \ $(OBJDIR)/diffcmd_.c:$(OBJDIR)/diffcmd.h \ $(OBJDIR)/dispatch_.c:$(OBJDIR)/dispatch.h \ $(OBJDIR)/doc_.c:$(OBJDIR)/doc.h \ $(OBJDIR)/encode_.c:$(OBJDIR)/encode.h \ $(OBJDIR)/event_.c:$(OBJDIR)/event.h \ $(OBJDIR)/export_.c:$(OBJDIR)/export.h \ $(OBJDIR)/file_.c:$(OBJDIR)/file.h \ $(OBJDIR)/finfo_.c:$(OBJDIR)/finfo.h \ $(OBJDIR)/foci_.c:$(OBJDIR)/foci.h \ $(OBJDIR)/fshell_.c:$(OBJDIR)/fshell.h \ $(OBJDIR)/fusefs_.c:$(OBJDIR)/fusefs.h \ $(OBJDIR)/glob_.c:$(OBJDIR)/glob.h \ $(OBJDIR)/graph_.c:$(OBJDIR)/graph.h \ $(OBJDIR)/gzip_.c:$(OBJDIR)/gzip.h \ $(OBJDIR)/http_.c:$(OBJDIR)/http.h \ $(OBJDIR)/http_socket_.c:$(OBJDIR)/http_socket.h \ $(OBJDIR)/http_ssl_.c:$(OBJDIR)/http_ssl.h \ |
︙ | ︙ | |||
1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 | $(OBJDIR)/tar_.c:$(OBJDIR)/tar.h \ $(OBJDIR)/th_main_.c:$(OBJDIR)/th_main.h \ $(OBJDIR)/timeline_.c:$(OBJDIR)/timeline.h \ $(OBJDIR)/tkt_.c:$(OBJDIR)/tkt.h \ $(OBJDIR)/tktsetup_.c:$(OBJDIR)/tktsetup.h \ $(OBJDIR)/undo_.c:$(OBJDIR)/undo.h \ $(OBJDIR)/unicode_.c:$(OBJDIR)/unicode.h \ $(OBJDIR)/update_.c:$(OBJDIR)/update.h \ $(OBJDIR)/url_.c:$(OBJDIR)/url.h \ $(OBJDIR)/user_.c:$(OBJDIR)/user.h \ $(OBJDIR)/utf8_.c:$(OBJDIR)/utf8.h \ $(OBJDIR)/util_.c:$(OBJDIR)/util.h \ $(OBJDIR)/verify_.c:$(OBJDIR)/verify.h \ $(OBJDIR)/vfile_.c:$(OBJDIR)/vfile.h \ | > | 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 | $(OBJDIR)/tar_.c:$(OBJDIR)/tar.h \ $(OBJDIR)/th_main_.c:$(OBJDIR)/th_main.h \ $(OBJDIR)/timeline_.c:$(OBJDIR)/timeline.h \ $(OBJDIR)/tkt_.c:$(OBJDIR)/tkt.h \ $(OBJDIR)/tktsetup_.c:$(OBJDIR)/tktsetup.h \ $(OBJDIR)/undo_.c:$(OBJDIR)/undo.h \ $(OBJDIR)/unicode_.c:$(OBJDIR)/unicode.h \ $(OBJDIR)/unversioned_.c:$(OBJDIR)/unversioned.h \ $(OBJDIR)/update_.c:$(OBJDIR)/update.h \ $(OBJDIR)/url_.c:$(OBJDIR)/url.h \ $(OBJDIR)/user_.c:$(OBJDIR)/user.h \ $(OBJDIR)/utf8_.c:$(OBJDIR)/utf8.h \ $(OBJDIR)/util_.c:$(OBJDIR)/util.h \ $(OBJDIR)/verify_.c:$(OBJDIR)/verify.h \ $(OBJDIR)/vfile_.c:$(OBJDIR)/vfile.h \ |
︙ | ︙ | |||
1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 | $(OBJDIR)/diffcmd_.c: $(SRCDIR)/diffcmd.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/diffcmd.c >$@ $(OBJDIR)/diffcmd.o: $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/diffcmd.o -c $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h: $(OBJDIR)/headers $(OBJDIR)/doc_.c: $(SRCDIR)/doc.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/doc.c >$@ $(OBJDIR)/doc.o: $(OBJDIR)/doc_.c $(OBJDIR)/doc.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/doc.o -c $(OBJDIR)/doc_.c | > > > > > > > > | 1391 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 | $(OBJDIR)/diffcmd_.c: $(SRCDIR)/diffcmd.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/diffcmd.c >$@ $(OBJDIR)/diffcmd.o: $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/diffcmd.o -c $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h: $(OBJDIR)/headers $(OBJDIR)/dispatch_.c: $(SRCDIR)/dispatch.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/dispatch.c >$@ $(OBJDIR)/dispatch.o: $(OBJDIR)/dispatch_.c $(OBJDIR)/dispatch.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/dispatch.o -c $(OBJDIR)/dispatch_.c $(OBJDIR)/dispatch.h: $(OBJDIR)/headers $(OBJDIR)/doc_.c: $(SRCDIR)/doc.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/doc.c >$@ $(OBJDIR)/doc.o: $(OBJDIR)/doc_.c $(OBJDIR)/doc.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/doc.o -c $(OBJDIR)/doc_.c |
︙ | ︙ | |||
1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 | $(OBJDIR)/foci_.c: $(SRCDIR)/foci.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/foci.c >$@ $(OBJDIR)/foci.o: $(OBJDIR)/foci_.c $(OBJDIR)/foci.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/foci.o -c $(OBJDIR)/foci_.c $(OBJDIR)/foci.h: $(OBJDIR)/headers $(OBJDIR)/fusefs_.c: $(SRCDIR)/fusefs.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/fusefs.c >$@ $(OBJDIR)/fusefs.o: $(OBJDIR)/fusefs_.c $(OBJDIR)/fusefs.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/fusefs.o -c $(OBJDIR)/fusefs_.c | > > > > > > > > | 1455 1456 1457 1458 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 | $(OBJDIR)/foci_.c: $(SRCDIR)/foci.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/foci.c >$@ $(OBJDIR)/foci.o: $(OBJDIR)/foci_.c $(OBJDIR)/foci.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/foci.o -c $(OBJDIR)/foci_.c $(OBJDIR)/foci.h: $(OBJDIR)/headers $(OBJDIR)/fshell_.c: $(SRCDIR)/fshell.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/fshell.c >$@ $(OBJDIR)/fshell.o: $(OBJDIR)/fshell_.c $(OBJDIR)/fshell.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/fshell.o -c $(OBJDIR)/fshell_.c $(OBJDIR)/fshell.h: $(OBJDIR)/headers $(OBJDIR)/fusefs_.c: $(SRCDIR)/fusefs.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/fusefs.c >$@ $(OBJDIR)/fusefs.o: $(OBJDIR)/fusefs_.c $(OBJDIR)/fusefs.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/fusefs.o -c $(OBJDIR)/fusefs_.c |
︙ | ︙ | |||
1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 | $(OBJDIR)/unicode_.c: $(SRCDIR)/unicode.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/unicode.c >$@ $(OBJDIR)/unicode.o: $(OBJDIR)/unicode_.c $(OBJDIR)/unicode.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/unicode.o -c $(OBJDIR)/unicode_.c $(OBJDIR)/unicode.h: $(OBJDIR)/headers $(OBJDIR)/update_.c: $(SRCDIR)/update.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/update.c >$@ $(OBJDIR)/update.o: $(OBJDIR)/update_.c $(OBJDIR)/update.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/update.o -c $(OBJDIR)/update_.c | > > > > > > > > | 2031 2032 2033 2034 2035 2036 2037 2038 2039 2040 2041 2042 2043 2044 2045 2046 2047 2048 2049 2050 2051 2052 | $(OBJDIR)/unicode_.c: $(SRCDIR)/unicode.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/unicode.c >$@ $(OBJDIR)/unicode.o: $(OBJDIR)/unicode_.c $(OBJDIR)/unicode.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/unicode.o -c $(OBJDIR)/unicode_.c $(OBJDIR)/unicode.h: $(OBJDIR)/headers $(OBJDIR)/unversioned_.c: $(SRCDIR)/unversioned.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/unversioned.c >$@ $(OBJDIR)/unversioned.o: $(OBJDIR)/unversioned_.c $(OBJDIR)/unversioned.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/unversioned.o -c $(OBJDIR)/unversioned_.c $(OBJDIR)/unversioned.h: $(OBJDIR)/headers $(OBJDIR)/update_.c: $(SRCDIR)/update.c $(TRANSLATE) $(TRANSLATE) $(SRCDIR)/update.c >$@ $(OBJDIR)/update.o: $(OBJDIR)/update_.c $(OBJDIR)/update.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/update.o -c $(OBJDIR)/update_.c |
︙ | ︙ | |||
2103 2104 2105 2106 2107 2108 2109 2110 | $(TRANSLATE) $(SRCDIR)/zip.c >$@ $(OBJDIR)/zip.o: $(OBJDIR)/zip_.c $(OBJDIR)/zip.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/zip.o -c $(OBJDIR)/zip_.c $(OBJDIR)/zip.h: $(OBJDIR)/headers SQLITE_OPTIONS = -DNDEBUG=1 \ | > > > > > > > > > > | > > < < | | 2160 2161 2162 2163 2164 2165 2166 2167 2168 2169 2170 2171 2172 2173 2174 2175 2176 2177 2178 2179 2180 2181 2182 2183 2184 2185 2186 2187 2188 2189 2190 2191 2192 2193 2194 2195 2196 2197 | $(TRANSLATE) $(SRCDIR)/zip.c >$@ $(OBJDIR)/zip.o: $(OBJDIR)/zip_.c $(OBJDIR)/zip.h $(SRCDIR)/config.h $(XTCC) -o $(OBJDIR)/zip.o -c $(OBJDIR)/zip_.c $(OBJDIR)/zip.h: $(OBJDIR)/headers MINGW_OPTIONS = -D_HAVE__MINGW_H SQLITE_OPTIONS = -DNDEBUG=1 \ -DSQLITE_THREADSAFE=0 \ -DSQLITE_DEFAULT_MEMSTATUS=0 \ -DSQLITE_DEFAULT_WAL_SYNCHRONOUS=1 \ -DSQLITE_LIKE_DOESNT_MATCH_BLOBS \ -DSQLITE_OMIT_DECLTYPE \ -DSQLITE_OMIT_DEPRECATED \ -DSQLITE_OMIT_PROGRESS_CALLBACK \ -DSQLITE_OMIT_SHARED_CACHE \ -DSQLITE_OMIT_LOAD_EXTENSION \ -DSQLITE_MAX_EXPR_DEPTH=0 \ -DSQLITE_USE_ALLOCA \ -DSQLITE_ENABLE_LOCKING_STYLE=0 \ -DSQLITE_DEFAULT_FILE_FORMAT=4 \ -DSQLITE_ENABLE_EXPLAIN_COMMENTS \ -DSQLITE_ENABLE_FTS4 \ -DSQLITE_ENABLE_FTS3_PARENTHESIS \ -DSQLITE_ENABLE_DBSTAT_VTAB \ -DSQLITE_ENABLE_JSON1 \ -DSQLITE_ENABLE_FTS5 \ -DSQLITE_WIN32_NO_ANSI \ $(MINGW_OPTIONS) \ -DSQLITE_USE_MALLOC_H \ -DSQLITE_USE_MSIZE SHELL_OPTIONS = -Dmain=sqlite3_shell \ -DSQLITE_SHELL_IS_UTF8=1 \ -DSQLITE_OMIT_LOAD_EXTENSION=1 \ -DUSE_SYSTEM_SQLITE=$(USE_SYSTEM_SQLITE) \ |
︙ | ︙ | |||
2143 2144 2145 2146 2147 2148 2149 | -c $(SQLITE3_SRC) -o $@ $(OBJDIR)/cson_amalgamation.o: $(SRCDIR)/cson_amalgamation.c $(XTCC) -c $(SRCDIR)/cson_amalgamation.c -o $@ $(OBJDIR)/json.o $(OBJDIR)/json_artifact.o $(OBJDIR)/json_branch.o $(OBJDIR)/json_config.o $(OBJDIR)/json_diff.o $(OBJDIR)/json_dir.o $(OBJDIR)/jsos_finfo.o $(OBJDIR)/json_login.o $(OBJDIR)/json_query.o $(OBJDIR)/json_report.o $(OBJDIR)/json_status.o $(OBJDIR)/json_tag.o $(OBJDIR)/json_timeline.o $(OBJDIR)/json_user.o $(OBJDIR)/json_wiki.o : $(SRCDIR)/json_detail.h | | | | 2210 2211 2212 2213 2214 2215 2216 2217 2218 2219 2220 2221 2222 2223 2224 2225 2226 2227 2228 2229 2230 2231 | -c $(SQLITE3_SRC) -o $@ $(OBJDIR)/cson_amalgamation.o: $(SRCDIR)/cson_amalgamation.c $(XTCC) -c $(SRCDIR)/cson_amalgamation.c -o $@ $(OBJDIR)/json.o $(OBJDIR)/json_artifact.o $(OBJDIR)/json_branch.o $(OBJDIR)/json_config.o $(OBJDIR)/json_diff.o $(OBJDIR)/json_dir.o $(OBJDIR)/jsos_finfo.o $(OBJDIR)/json_login.o $(OBJDIR)/json_query.o $(OBJDIR)/json_report.o $(OBJDIR)/json_status.o $(OBJDIR)/json_tag.o $(OBJDIR)/json_timeline.o $(OBJDIR)/json_user.o $(OBJDIR)/json_wiki.o : $(SRCDIR)/json_detail.h $(OBJDIR)/shell.o: $(SQLITE3_SHELL_SRC) $(SRCDIR)/sqlite3.h $(SRCDIR)/../win/Makefile.mingw.mistachkin $(XTCC) $(SHELL_OPTIONS) $(SHELL_CFLAGS) -c $(SQLITE3_SHELL_SRC) -o $@ $(OBJDIR)/th.o: $(SRCDIR)/th.c $(XTCC) -c $(SRCDIR)/th.c -o $@ $(OBJDIR)/th_lang.o: $(SRCDIR)/th_lang.c $(XTCC) -c $(SRCDIR)/th_lang.c -o $@ $(OBJDIR)/th_tcl.o: $(SRCDIR)/th_tcl.c $(XTCC) -c $(SRCDIR)/th_tcl.c -o $@ $(OBJDIR)/miniz.o: $(SRCDIR)/miniz.c $(XTCC) $(MINIZ_OPTIONS) -c $(SRCDIR)/miniz.c -o $@ |
Changes to win/Makefile.msc.
︙ | ︙ | |||
96 97 98 99 100 101 102 | # Enable support for the SQLite Encryption Extension? !ifndef USE_SEE USE_SEE = 0 !endif !if $(FOSSIL_ENABLE_SSL)!=0 | | | | 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 | # Enable support for the SQLite Encryption Extension? !ifndef USE_SEE USE_SEE = 0 !endif !if $(FOSSIL_ENABLE_SSL)!=0 SSLDIR = $(B)\compat\openssl-1.0.2j SSLINCDIR = $(SSLDIR)\inc32 !if $(FOSSIL_DYNAMIC_BUILD)!=0 SSLLIBDIR = $(SSLDIR)\out32dll !else SSLLIBDIR = $(SSLDIR)\out32 !endif SSLLFLAGS = /nologo /opt:ref /debug SSLLIB = ssleay32.lib libeay32.lib user32.lib gdi32.lib crypt32.lib !if "$(PLATFORM)"=="amd64" || "$(PLATFORM)"=="x64" !message Using 'x64' platform for OpenSSL... # BUGBUG (OpenSSL): Using "no-ssl*" here breaks the build. # SSLCONFIG = VC-WIN64A no-asm no-ssl2 no-ssl3 SSLCONFIG = VC-WIN64A no-asm !if $(FOSSIL_DYNAMIC_BUILD)!=0 SSLCONFIG = $(SSLCONFIG) shared |
︙ | ︙ | |||
310 311 312 313 314 315 316 | !if $(USE_SEE)!=0 TCC = $(TCC) /DUSE_SEE=1 RCC = $(RCC) /DUSE_SEE=1 !endif SQLITE_OPTIONS = /DNDEBUG=1 \ | > > > > > > > > | > > < < | 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 | !if $(USE_SEE)!=0 TCC = $(TCC) /DUSE_SEE=1 RCC = $(RCC) /DUSE_SEE=1 !endif SQLITE_OPTIONS = /DNDEBUG=1 \ /DSQLITE_THREADSAFE=0 \ /DSQLITE_DEFAULT_MEMSTATUS=0 \ /DSQLITE_DEFAULT_WAL_SYNCHRONOUS=1 \ /DSQLITE_LIKE_DOESNT_MATCH_BLOBS \ /DSQLITE_OMIT_DECLTYPE \ /DSQLITE_OMIT_DEPRECATED \ /DSQLITE_OMIT_PROGRESS_CALLBACK \ /DSQLITE_OMIT_SHARED_CACHE \ /DSQLITE_OMIT_LOAD_EXTENSION \ /DSQLITE_MAX_EXPR_DEPTH=0 \ /DSQLITE_USE_ALLOCA \ /DSQLITE_ENABLE_LOCKING_STYLE=0 \ /DSQLITE_DEFAULT_FILE_FORMAT=4 \ /DSQLITE_ENABLE_EXPLAIN_COMMENTS \ /DSQLITE_ENABLE_FTS4 \ /DSQLITE_ENABLE_FTS3_PARENTHESIS \ /DSQLITE_ENABLE_DBSTAT_VTAB \ /DSQLITE_ENABLE_JSON1 \ /DSQLITE_ENABLE_FTS5 \ /DSQLITE_WIN32_NO_ANSI |
︙ | ︙ | |||
363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 | content_.c \ db_.c \ delta_.c \ deltacmd_.c \ descendants_.c \ diff_.c \ diffcmd_.c \ doc_.c \ encode_.c \ event_.c \ export_.c \ file_.c \ finfo_.c \ foci_.c \ fusefs_.c \ glob_.c \ graph_.c \ gzip_.c \ http_.c \ http_socket_.c \ http_ssl_.c \ | > > | 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 | content_.c \ db_.c \ delta_.c \ deltacmd_.c \ descendants_.c \ diff_.c \ diffcmd_.c \ dispatch_.c \ doc_.c \ encode_.c \ event_.c \ export_.c \ file_.c \ finfo_.c \ foci_.c \ fshell_.c \ fusefs_.c \ glob_.c \ graph_.c \ gzip_.c \ http_.c \ http_socket_.c \ http_ssl_.c \ |
︙ | ︙ | |||
441 442 443 444 445 446 447 448 449 450 451 452 453 454 | tar_.c \ th_main_.c \ timeline_.c \ tkt_.c \ tktsetup_.c \ undo_.c \ unicode_.c \ update_.c \ url_.c \ user_.c \ utf8_.c \ util_.c \ verify_.c \ vfile_.c \ | > | 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 | tar_.c \ th_main_.c \ timeline_.c \ tkt_.c \ tktsetup_.c \ undo_.c \ unicode_.c \ unversioned_.c \ update_.c \ url_.c \ user_.c \ utf8_.c \ util_.c \ verify_.c \ vfile_.c \ |
︙ | ︙ | |||
534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 | $(OX)\cson_amalgamation$O \ $(OX)\db$O \ $(OX)\delta$O \ $(OX)\deltacmd$O \ $(OX)\descendants$O \ $(OX)\diff$O \ $(OX)\diffcmd$O \ $(OX)\doc$O \ $(OX)\encode$O \ $(OX)\event$O \ $(OX)\export$O \ $(OX)\file$O \ $(OX)\finfo$O \ $(OX)\foci$O \ $(OX)\fusefs$O \ $(OX)\glob$O \ $(OX)\graph$O \ $(OX)\gzip$O \ $(OX)\http$O \ $(OX)\http_socket$O \ $(OX)\http_ssl$O \ | > > | 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 | $(OX)\cson_amalgamation$O \ $(OX)\db$O \ $(OX)\delta$O \ $(OX)\deltacmd$O \ $(OX)\descendants$O \ $(OX)\diff$O \ $(OX)\diffcmd$O \ $(OX)\dispatch$O \ $(OX)\doc$O \ $(OX)\encode$O \ $(OX)\event$O \ $(OX)\export$O \ $(OX)\file$O \ $(OX)\finfo$O \ $(OX)\foci$O \ $(OX)\fshell$O \ $(OX)\fusefs$O \ $(OX)\glob$O \ $(OX)\graph$O \ $(OX)\gzip$O \ $(OX)\http$O \ $(OX)\http_socket$O \ $(OX)\http_ssl$O \ |
︙ | ︙ | |||
617 618 619 620 621 622 623 624 625 626 627 628 629 630 | $(OX)\th_main$O \ $(OX)\th_tcl$O \ $(OX)\timeline$O \ $(OX)\tkt$O \ $(OX)\tktsetup$O \ $(OX)\undo$O \ $(OX)\unicode$O \ $(OX)\update$O \ $(OX)\url$O \ $(OX)\user$O \ $(OX)\utf8$O \ $(OX)\util$O \ $(OX)\verify$O \ $(OX)\vfile$O \ | > | 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 | $(OX)\th_main$O \ $(OX)\th_tcl$O \ $(OX)\timeline$O \ $(OX)\tkt$O \ $(OX)\tktsetup$O \ $(OX)\undo$O \ $(OX)\unicode$O \ $(OX)\unversioned$O \ $(OX)\update$O \ $(OX)\url$O \ $(OX)\user$O \ $(OX)\utf8$O \ $(OX)\util$O \ $(OX)\verify$O \ $(OX)\vfile$O \ |
︙ | ︙ | |||
714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 | echo $(OX)\cson_amalgamation.obj >> $@ echo $(OX)\db.obj >> $@ echo $(OX)\delta.obj >> $@ echo $(OX)\deltacmd.obj >> $@ echo $(OX)\descendants.obj >> $@ echo $(OX)\diff.obj >> $@ echo $(OX)\diffcmd.obj >> $@ echo $(OX)\doc.obj >> $@ echo $(OX)\encode.obj >> $@ echo $(OX)\event.obj >> $@ echo $(OX)\export.obj >> $@ echo $(OX)\file.obj >> $@ echo $(OX)\finfo.obj >> $@ echo $(OX)\foci.obj >> $@ echo $(OX)\fusefs.obj >> $@ echo $(OX)\glob.obj >> $@ echo $(OX)\graph.obj >> $@ echo $(OX)\gzip.obj >> $@ echo $(OX)\http.obj >> $@ echo $(OX)\http_socket.obj >> $@ echo $(OX)\http_ssl.obj >> $@ | > > | 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 | echo $(OX)\cson_amalgamation.obj >> $@ echo $(OX)\db.obj >> $@ echo $(OX)\delta.obj >> $@ echo $(OX)\deltacmd.obj >> $@ echo $(OX)\descendants.obj >> $@ echo $(OX)\diff.obj >> $@ echo $(OX)\diffcmd.obj >> $@ echo $(OX)\dispatch.obj >> $@ echo $(OX)\doc.obj >> $@ echo $(OX)\encode.obj >> $@ echo $(OX)\event.obj >> $@ echo $(OX)\export.obj >> $@ echo $(OX)\file.obj >> $@ echo $(OX)\finfo.obj >> $@ echo $(OX)\foci.obj >> $@ echo $(OX)\fshell.obj >> $@ echo $(OX)\fusefs.obj >> $@ echo $(OX)\glob.obj >> $@ echo $(OX)\graph.obj >> $@ echo $(OX)\gzip.obj >> $@ echo $(OX)\http.obj >> $@ echo $(OX)\http_socket.obj >> $@ echo $(OX)\http_ssl.obj >> $@ |
︙ | ︙ | |||
797 798 799 800 801 802 803 804 805 806 807 808 809 810 | echo $(OX)\th_main.obj >> $@ echo $(OX)\th_tcl.obj >> $@ echo $(OX)\timeline.obj >> $@ echo $(OX)\tkt.obj >> $@ echo $(OX)\tktsetup.obj >> $@ echo $(OX)\undo.obj >> $@ echo $(OX)\unicode.obj >> $@ echo $(OX)\update.obj >> $@ echo $(OX)\url.obj >> $@ echo $(OX)\user.obj >> $@ echo $(OX)\utf8.obj >> $@ echo $(OX)\util.obj >> $@ echo $(OX)\verify.obj >> $@ echo $(OX)\vfile.obj >> $@ | > | 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 | echo $(OX)\th_main.obj >> $@ echo $(OX)\th_tcl.obj >> $@ echo $(OX)\timeline.obj >> $@ echo $(OX)\tkt.obj >> $@ echo $(OX)\tktsetup.obj >> $@ echo $(OX)\undo.obj >> $@ echo $(OX)\unicode.obj >> $@ echo $(OX)\unversioned.obj >> $@ echo $(OX)\update.obj >> $@ echo $(OX)\url.obj >> $@ echo $(OX)\user.obj >> $@ echo $(OX)\utf8.obj >> $@ echo $(OX)\util.obj >> $@ echo $(OX)\verify.obj >> $@ echo $(OX)\vfile.obj >> $@ |
︙ | ︙ | |||
838 839 840 841 842 843 844 | mkversion$E: $(SRCDIR)\mkversion.c $(BCC) $** codecheck1$E: $(SRCDIR)\codecheck1.c $(BCC) $** | > > > | > > > | | 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 | mkversion$E: $(SRCDIR)\mkversion.c $(BCC) $** codecheck1$E: $(SRCDIR)\codecheck1.c $(BCC) $** !if $(USE_SEE)!=0 SQLITE3_SHELL_SRC = $(SRCDIR)\shell-see.c !else SQLITE3_SHELL_SRC = $(SRCDIR)\shell.c !endif $(OX)\shell$O : $(SQLITE3_SHELL_SRC) $B\win\Makefile.msc $(TCC) /Fo$@ $(SHELL_OPTIONS) $(SQLITE_OPTIONS) $(SHELL_CFLAGS) -c $(SQLITE3_SHELL_SRC) !if $(USE_SEE)!=0 SQLITE3_SRC = $(SRCDIR)\sqlite3-see.c !else SQLITE3_SRC = $(SRCDIR)\sqlite3.c !endif |
︙ | ︙ | |||
1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 | translate$E $** > $@ $(OX)\diffcmd$O : diffcmd_.c diffcmd.h $(TCC) /Fo$@ -c diffcmd_.c diffcmd_.c : $(SRCDIR)\diffcmd.c translate$E $** > $@ $(OX)\doc$O : doc_.c doc.h $(TCC) /Fo$@ -c doc_.c doc_.c : $(SRCDIR)\doc.c translate$E $** > $@ | > > > > > > | 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 | translate$E $** > $@ $(OX)\diffcmd$O : diffcmd_.c diffcmd.h $(TCC) /Fo$@ -c diffcmd_.c diffcmd_.c : $(SRCDIR)\diffcmd.c translate$E $** > $@ $(OX)\dispatch$O : dispatch_.c dispatch.h $(TCC) /Fo$@ -c dispatch_.c dispatch_.c : $(SRCDIR)\dispatch.c translate$E $** > $@ $(OX)\doc$O : doc_.c doc.h $(TCC) /Fo$@ -c doc_.c doc_.c : $(SRCDIR)\doc.c translate$E $** > $@ |
︙ | ︙ | |||
1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 | translate$E $** > $@ $(OX)\foci$O : foci_.c foci.h $(TCC) /Fo$@ -c foci_.c foci_.c : $(SRCDIR)\foci.c translate$E $** > $@ $(OX)\fusefs$O : fusefs_.c fusefs.h $(TCC) /Fo$@ -c fusefs_.c fusefs_.c : $(SRCDIR)\fusefs.c translate$E $** > $@ | > > > > > > | 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 | translate$E $** > $@ $(OX)\foci$O : foci_.c foci.h $(TCC) /Fo$@ -c foci_.c foci_.c : $(SRCDIR)\foci.c translate$E $** > $@ $(OX)\fshell$O : fshell_.c fshell.h $(TCC) /Fo$@ -c fshell_.c fshell_.c : $(SRCDIR)\fshell.c translate$E $** > $@ $(OX)\fusefs$O : fusefs_.c fusefs.h $(TCC) /Fo$@ -c fusefs_.c fusefs_.c : $(SRCDIR)\fusefs.c translate$E $** > $@ |
︙ | ︙ | |||
1540 1541 1542 1543 1544 1545 1546 1547 1548 1549 1550 1551 1552 1553 | translate$E $** > $@ $(OX)\unicode$O : unicode_.c unicode.h $(TCC) /Fo$@ -c unicode_.c unicode_.c : $(SRCDIR)\unicode.c translate$E $** > $@ $(OX)\update$O : update_.c update.h $(TCC) /Fo$@ -c update_.c update_.c : $(SRCDIR)\update.c translate$E $** > $@ | > > > > > > | 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 | translate$E $** > $@ $(OX)\unicode$O : unicode_.c unicode.h $(TCC) /Fo$@ -c unicode_.c unicode_.c : $(SRCDIR)\unicode.c translate$E $** > $@ $(OX)\unversioned$O : unversioned_.c unversioned.h $(TCC) /Fo$@ -c unversioned_.c unversioned_.c : $(SRCDIR)\unversioned.c translate$E $** > $@ $(OX)\update$O : update_.c update.h $(TCC) /Fo$@ -c update_.c update_.c : $(SRCDIR)\update.c translate$E $** > $@ |
︙ | ︙ | |||
1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 | content_.c:content.h \ db_.c:db.h \ delta_.c:delta.h \ deltacmd_.c:deltacmd.h \ descendants_.c:descendants.h \ diff_.c:diff.h \ diffcmd_.c:diffcmd.h \ doc_.c:doc.h \ encode_.c:encode.h \ event_.c:event.h \ export_.c:export.h \ file_.c:file.h \ finfo_.c:finfo.h \ foci_.c:foci.h \ fusefs_.c:fusefs.h \ glob_.c:glob.h \ graph_.c:graph.h \ gzip_.c:gzip.h \ http_.c:http.h \ http_socket_.c:http_socket.h \ http_ssl_.c:http_ssl.h \ | > > | 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 | content_.c:content.h \ db_.c:db.h \ delta_.c:delta.h \ deltacmd_.c:deltacmd.h \ descendants_.c:descendants.h \ diff_.c:diff.h \ diffcmd_.c:diffcmd.h \ dispatch_.c:dispatch.h \ doc_.c:doc.h \ encode_.c:encode.h \ event_.c:event.h \ export_.c:export.h \ file_.c:file.h \ finfo_.c:finfo.h \ foci_.c:foci.h \ fshell_.c:fshell.h \ fusefs_.c:fusefs.h \ glob_.c:glob.h \ graph_.c:graph.h \ gzip_.c:gzip.h \ http_.c:http.h \ http_socket_.c:http_socket.h \ http_ssl_.c:http_ssl.h \ |
︙ | ︙ | |||
1739 1740 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 | tar_.c:tar.h \ th_main_.c:th_main.h \ timeline_.c:timeline.h \ tkt_.c:tkt.h \ tktsetup_.c:tktsetup.h \ undo_.c:undo.h \ unicode_.c:unicode.h \ update_.c:update.h \ url_.c:url.h \ user_.c:user.h \ utf8_.c:utf8.h \ util_.c:util.h \ verify_.c:verify.h \ vfile_.c:vfile.h \ | > | 1782 1783 1784 1785 1786 1787 1788 1789 1790 1791 1792 1793 1794 1795 1796 | tar_.c:tar.h \ th_main_.c:th_main.h \ timeline_.c:timeline.h \ tkt_.c:tkt.h \ tktsetup_.c:tktsetup.h \ undo_.c:undo.h \ unicode_.c:unicode.h \ unversioned_.c:unversioned.h \ update_.c:update.h \ url_.c:url.h \ user_.c:user.h \ utf8_.c:utf8.h \ util_.c:util.h \ verify_.c:verify.h \ vfile_.c:vfile.h \ |
︙ | ︙ |
Added www/aboutcgi.wiki.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 | <title>How CGI Works In Fossil</title> <h2>Introduction</h2><blockquote> <p>CGI or "Common Gateway Interface" is a venerable yet reliable technique for generating dynamic web content. This article gives a quick background on how CGI works and describes how Fossil can act as a CGI service. <p>This is a "how it works" guide. If you just want to set up Fossil as a CGI server, see the [./server.wiki | Fossil Server Setup] page. </blockquote> <h2>A Quick Review Of CGI</h2><blockquote> <p> An HTTP request is a block of text that is sent by a client application (usually a web browser) and arrives at the web server over a network connection. The HTTP request contains a URL that describes the information being requested. The URL in the HTTP request is typically the same URL that appears in the URL bar at the top of the web browser that is making the request. The URL might contain a "?" character followed query parameters. The HTTP will usually also contain other information such as the name of the application that made the request, whether or not the requesting application can except a compressed reply, POST parameters from forms, and so forth. <p> The job of the web server is to interpret the HTTP request and formulate an appropriate reply. The web server is free to interpret the HTTP request in any way it wants. But most web servers follow a similar pattern, described below. (Note: details may vary from one web server to another.) <p> Suppose the URL in the HTTP request looks like this: <blockquote><b>/one/two/timeline/four</b></blockquote> Most web servers will search their content area for files that match some prefix of the URL. The search starts with <b>/one</b>, then goes to <b>/one/two</b>, then <b>/one/two/timeline</b>, and finally <b>/one/two/timeline/four</b> is checked. The search stops at the first match. <p> Suppose the first match is <b>/one/two</b>. If <b>/one/two</b> is an ordinary file in the content area, then that file is returned as static content. The "<b>/timeline/four</b>" suffix is silently ignored. <p> If <b>/one/two</b> is a CGI script (or program), then the web server executes the <b>/one/two</b> script. The output generated by the script is collected and repackaged as the HTTP reply. 
<p> Before executing the CGI script, the web server will set up various environment variables with information useful to the CGI script: <table border=1 cellpadding=5> <tr><th>Environment<br>Variable<th>Meaning <tr><td>GATEWAY_INTERFACE<td>Always set to "CGI/1.0" <tr><td>REQUEST_URI <td>The input URL from the HTTP request. <tr><td>SCRIPT_NAME <td>The prefix of the input URL that matches the CGI script name. In this example: "/one/two". <tr><td>PATH_INFO <td>The suffix of the URL beyond the name of the CGI script. In this example: "timeline/four". <tr><td>QUERY_STRING <td>The query string that follows the "?" in the URL, if there is one. </table> <p> There are other CGI environment variables beyond those listed above. Many Fossil servers implement the [https://www.fossil-scm.org/fossil/test_env/two/three?abc=xyz|test_env] webpage that shows some of the CGI environment variables that Fossil pays attention to. <p> In addition to setting various CGI environment variables, if the HTTP request contains POST content, then the web server relays the POST content to standard input of the CGI script. <p> In summary, the task of the CGI script is to read the various CGI environment variables and the POST content on standard input (if any), figure out an appropriate reply, then write that reply on standard output. The web server will read the output from the CGI script, reformat it into an appropriate HTTP reply, and relay the result back to the requesting application. The CGI script exits as soon as it generates a single reply. The web server will (usually) persist and handle multiple HTTP requests, but a CGI script handles just one HTTP request and then exits. <p> The above is a rough outline of how CGI works. There are many details omitted from this brief discussion. See other on-line CGI tutorials for further information. </blockquote> <h2>How Fossil Acts As A CGI Program</h2> <blockquote> An appropriate CGI script for running Fossil will look something like the following: <blockquote><pre> #!/usr/bin/fossil repository: /home/www/repos/project.fossil </pre></blockquote> The first line of the script is a "[https://en.wikipedia.org/wiki/Shebang_%28Unix%29|shebang]" that tells the operating system what program to use as the interpreter for this script. On unix, when you execute a script that starts with a shebang, the operating system runs the program identified by the shebang with a single argument that is the full pathname of the script itself. In our example, the interpreter is Fossil, and the argument might be something like "/var/www/cgi-bin/one/two" (depending on how your particular web server is configured). <p> The Fossil program that is run as the script interpreter is the same Fossil that runs when you type ordinary Fossil commands like "fossil sync" or "fossil commit". But in this case, as soon as it launches, the Fossil program recognizes that the GATEWAY_INTERFACE environment variable is set to "CGI/1.0" and it therefore knows that it is being used as CGI rather than as an ordinary command-line tool, and behaves accordingly. <p> When Fossil recognizes that it is being run as CGI, it opens and reads the file identified by its sole argument (the file named by <code>argv[1]</code>). In our example, the second line of that file tells Fossil the location of the repository it will be serving. Fossil then starts looking at the CGI environment variables to figure out what web page is being requested, generates that one web page, then exits. 
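To make the division of labor concrete, here is a minimal sketch of a generic CGI program written in C. It is not Fossil's own CGI handling code; it only demonstrates the contract described above: the web server supplies the request through environment variables (and POST content on standard input), and the script writes headers, a blank line, and a body on standard output.

<blockquote><verbatim>
#include <stdio.h>
#include <stdlib.h>

int main(void){
  /* Values placed in the environment by the web server before it runs the script */
  const char *zGateway = getenv("GATEWAY_INTERFACE"); /* "CGI/1.0" */
  const char *zScript  = getenv("SCRIPT_NAME");       /* e.g. "/one/two" */
  const char *zPath    = getenv("PATH_INFO");         /* e.g. "timeline/four" */
  const char *zQuery   = getenv("QUERY_STRING");      /* text after "?", if any */

  /* The reply is headers, then a blank line, then the body.  The web
  ** server repackages this text as the HTTP reply. */
  printf("Content-Type: text/plain\r\n\r\n");
  printf("GATEWAY_INTERFACE = %s\n", zGateway ? zGateway : "(not set)");
  printf("SCRIPT_NAME       = %s\n", zScript  ? zScript  : "(not set)");
  printf("PATH_INFO         = %s\n", zPath    ? zPath    : "(not set)");
  printf("QUERY_STRING      = %s\n", zQuery   ? zQuery   : "(not set)");
  return 0;
}
</verbatim></blockquote>

A CGI script handles exactly one request and then exits, so the program above needs no event loop; the web server starts a fresh copy of it for every incoming request.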
<p> Usually, the webpage being requested is the first term of the PATH_INFO environment variable. (Exceptions to this rule are noted in the sequel.) For our example, the first term of PATH_INFO is "timeline", which means that Fossil will generate the [/help?cmd=/timeline|/timeline] webpage. <p> With Fossil, terms of PATH_INFO beyond the webpage name are converted into the "name" query parameter. Hence, the following two URLs mean exactly the same thing to Fossil: <ol type='A'> <li> [https://www.fossil-scm.org/fossil/info/c14ecc43] <li> [https://www.fossil-scm.org/fossil/info?name=c14ecc43] </ol> In both cases, the CGI script is called "/fossil". For case (A), the PATH_INFO variable will be "info/c14ecc43" and so the "[/help?cmd=/info|/info]" webpage will be generated and the suffix of PATH_INFO will be converted into the "name" query parameter, which identifies the artifact about which information is requested. In case (B), the PATH_INFO is just "info", but the same "name" query parameter is set explicitly by the URL itself. </blockquote>
<h2>Serving Multiple Fossil Repositories From One CGI Script</h2>
<blockquote> The previous example showed how to serve a single Fossil repository using a single CGI script. On a website that wants to serve multiple repositories, one could simply create multiple CGI scripts, one script for each repository. But it is also possible to serve multiple Fossil repositories from a single CGI script. <p> If the CGI script for Fossil contains a "directory:" line instead of a "repository:" line, then the argument to "directory:" is the name of a directory that contains multiple repository files, each ending with ".fossil". For example:
<blockquote><pre>
#!/usr/bin/fossil
directory: /home/www/repos
</pre></blockquote>
Suppose the /home/www/repos directory contains files named <b>one.fossil</b>, <b>two.fossil</b>, and <b>subdir/three.fossil</b>. Further suppose that the name of the CGI script (relative to the root of the webserver document area) is "cgis/example2". Then to see the timeline for the "three.fossil" repository, the URL would be: <blockquote> <b>http://example.com/cgis/example2/subdir/three/timeline</b> </blockquote> Here is what happens: <ol> <li> The input URI on the HTTP request is <b>/cgis/example2/subdir/three/timeline</b> <li> The web server searches prefixes of the input URI until it finds the "cgis/example2" script. The web server then sets PATH_INFO to the "subdir/three/timeline" suffix and invokes the "cgis/example2" script. <li> Fossil runs and sees the "directory:" line pointing to "/home/www/repos". Fossil then starts pulling terms off the front of the PATH_INFO looking for a repository (an illustrative sketch of this search appears below). It first looks at "/home/www/repos/subdir.fossil" but there is no such repository. So then it looks at "/home/www/repos/subdir/three.fossil" and finds a repository. The PATH_INFO is shortened by removing "subdir/three/" leaving it at just "timeline". <li> Fossil looks at the rest of PATH_INFO to see that the webpage requested is "timeline". </ol> </blockquote>
<h2>Additional Observations</h2>
<blockquote><ol type="I"> <li><p> Fossil does not distinguish between the various HTTP methods (GET, PUT, DELETE, etc.). Fossil figures out what it needs to do purely from the webpage term of the URI. <li><p> Fossil does not distinguish between query parameters that are part of the URI, application/x-www-form-urlencoded or multipart/form-data encoded parameters that are part of the POST content, and cookies.
Each information source is seen as a space of key/value pairs which are loaded into an internal property hash table. The code that runs to generate the reply can then reference various property values. Fossil does not care where the value of each property comes from (POST content, cookies, or query parameters), only that the property exists and has a value. <li><p> The "[/help?cmd=ui|fossil ui]" and "[/help?cmd=server|fossil server]" commands are implemented using a simple built-in web server that accepts incoming HTTP requests, translates each request into a CGI invocation, then creates a separate child Fossil process to handle each request. In other words, CGI is used internally to implement "fossil ui/server". <p> SCGI is processed using the same built-in web server, just modified to parse SCGI requests instead of HTTP requests. Each SCGI request is converted into CGI, then Fossil creates a separate child Fossil process to handle each CGI request. </ol> </blockquote>
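Returning to the repository search in step 3 of the multi-repository example above: the following is an illustrative sketch, in C, of consuming terms from the front of PATH_INFO until a matching "*.fossil" file is found. It is not Fossil's actual implementation; the function name and interface are invented for this example, and PATH_INFO is assumed to carry no leading slash, as in the example above.

<blockquote><verbatim>
#include <stdio.h>
#include <string.h>
#include <unistd.h>    /* access() */

/*
** Search zRoot (the "directory:" argument) for a repository named by a
** prefix of zPathInfo.  On success, write the repository filename into
** zRepo and return the number of PATH_INFO bytes consumed.  Return -1
** if no prefix names a readable "*.fossil" file.
*/
static int find_repository(const char *zRoot, const char *zPathInfo,
                           char *zRepo, size_t nRepo){
  const char *z = zPathInfo;
  while( *z ){
    const char *zSlash = strchr(z+1, '/');
    int n = zSlash ? (int)(zSlash - zPathInfo) : (int)strlen(zPathInfo);
    snprintf(zRepo, nRepo, "%s/%.*s.fossil", zRoot, n, zPathInfo);
    if( access(zRepo, R_OK)==0 ) return n;
    if( zSlash==0 ) break;
    z = zSlash;        /* try a longer prefix on the next pass */
  }
  return -1;
}
</verbatim></blockquote>

With zRoot set to "/home/www/repos" and PATH_INFO of "subdir/three/timeline", the sketch first probes "/home/www/repos/subdir.fossil", then finds "/home/www/repos/subdir/three.fossil", leaving "timeline" behind as the name of the requested page.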
Changes to www/adding_code.wiki.
︙ | ︙ | |||
20 21 22 23 24 25 26 | for special comments that contain "help" text and which identify routines that implement specific commands or which generate particular web pages. 2. The <b>makeheaders</b> preprocessor generates all the ".h" files automatically. Fossil programmers write ".c" files only and let the makeheaders preprocessor create the ".h" files. | | | | | | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 | for special comments that contain "help" text and which identify routines that implement specific commands or which generate particular web pages. 2. The <b>makeheaders</b> preprocessor generates all the ".h" files automatically. Fossil programmers write ".c" files only and let the makeheaders preprocessor create the ".h" files. 3. The <b>translate</b> preprocessor converts source code lines that begin with "@" into string literals, or into print statements that generate web page output, depending on context. The [./makefile.wiki|Makefile] for Fossil takes care of running these preprocessors with all the right arguments and in the right order. So it is not necessary to understand the details of how these preprocessors work. (Though, the sources for all three preprocessors are included in the source tree and are well commented, if you want to dig deeper.) It is only necessary to know that these preprocessors exist and hence will effect the way you write code. <h2>3.0 Adding New Source Code Files</h2> New source code files are added in the "src/" subdirectory of the Fossil source tree. Suppose one wants to add a new source code file named "xyzzy.c". The first step is to add this file to the various makefiles. Do so by editing the file src/makemake.tcl and adding "xyzzy" (without the final ".c") to the list of source modules at the top of that script. Save the result and then run the makemake.tcl script using a TCL interpreter. The command to run the makemake.tcl script is: <b>tclsh makemake.tcl</b> The working directory must be src/ when the command above is run. Note that TCL is not normally required to build Fossil, but it is required for this step. If you do not have a TCL interpreter on your system already, they are easy to install. A popular choice is the [http://www.activestate.com/activetcl|Active Tcl] installation from ActiveState. After the makefiles have been updated, create the xyzzy.c source file from the following template: <blockquote><verbatim> /* ** Copyright boilerplate goes here. ***************************************************** ** High-level description of what this module goes ** here. */ #include "config.h" #include "xyzzy.h" #if INTERFACE /* Exported object (structure) definitions or #defines |
︙ | ︙ | |||
81 82 83 84 85 86 87 | normal Fossil source file must have a #include at the top that imports its private header file. (Some source files, such as "sqlite3.c" are exceptions to this rule. Don't worry about those exceptions. The files you write will require this #include line.) The "#if INTERFACE ... #endif" section is optional and is only needed if there are structure definitions or typedefs or macros that need to | | | 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 | normal Fossil source file must have a #include at the top that imports its private header file. (Some source files, such as "sqlite3.c" are exceptions to this rule. Don't worry about those exceptions. The files you write will require this #include line.) The "#if INTERFACE ... #endif" section is optional and is only needed if there are structure definitions or typedefs or macros that need to be used by other source code files. The makeheaders preprocessor uses definitions in the INTERFACE section to help it generate header files. See [../src/makeheaders.html | makeheaders.html] for additional information. After creating a template file such as shown above, and after updating the makefiles, you should be able to recompile Fossil and have it include your new source file, even before you source file contains any code. |
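As a concrete (and purely hypothetical) illustration of how the preprocessors interact, the fragment below extends the xyzzy.c template from above. The structure name and page name are invented for this example and are not part of Fossil; the point is only to show which preprocessor consumes which part of the file.

<blockquote><verbatim>
#if INTERFACE
/* Definitions placed inside an INTERFACE block are copied by the
** makeheaders preprocessor into generated headers, so that other
** modules can use them simply by including their own private header. */
struct Xyzzy {
  int nUse;              /* Number of times this object has been used */
  const char *zLabel;    /* Human-readable label */
};
#endif

/*
** WEBPAGE: xyzzy
**
** The mkindex preprocessor scans for comments like the one above and
** registers this routine as the generator for the /xyzzy web page.
** Because the function is not static, makeheaders also emits its
** prototype into the generated header files.
*/
void xyzzy_page(void){
  /* Lines beginning with "@" are rewritten by the translate
  ** preprocessor into statements that append text to the web page
  ** being generated. */
  @ <h1>Xyzzy</h1>
  @ <p>Nothing happens.</p>
}
</verbatim></blockquote>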
︙ | ︙ |
Changes to www/antibot.wiki.
1 2 3 4 | <title>Defense Against Spiders</title> The website presented by a Fossil server has many hyperlinks. Even a modest project can have millions of pages in its | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 | <title>Defense Against Spiders</title> The website presented by a Fossil server has many hyperlinks. Even a modest project can have millions of pages in its tree, and many of those pages (for example diffs and annotations and ZIP archive of older check-ins) can be expensive to compute. If a spider or bot tries to walk a website implemented by Fossil, it can present a crippling bandwidth and CPU load. The website presented by a Fossil server is intended to be used interactively by humans, not walked by spiders. This article describes the techniques used by Fossil to try to welcome human users while keeping out spiders. <h2>The "hyperlink" user capability</h2> Every Fossil web session has a "user". For random passers-by on the internet (and for spiders) that user is "nobody". The "anonymous" user is also available for humans who do not wish to identify themselves. The difference is that "anonymous" requires a login (using a password supplied via a CAPTCHA) whereas "nobody" does not require a login. The site administrator can also create logins with passwords for specific individuals. The "h" or "hyperlink" capability is a permission that can be granted to users that enables the display of hyperlinks. Most of the hyperlinks generated by Fossil are suppressed if this capability is missing. So one simple defense against spiders is to disable the "h" permission for the "nobody" user. This means that users must log in (perhaps as "anonymous") before they can see any of the hyperlinks. Spiders do not normally attempt to log into websites and will therefore not see most of the hyperlinks and will not try to walk the millions of historical check-ins and diffs available on a Fossil-generated website. If the "h" capability is missing from user "nobody" but is present for user "anonymous", then a message automatically appears at the top of each page inviting the user to log in as anonymous in order to activate hyperlinks. Removing the "h" capability from user "nobody" is an effective means of preventing spiders from walking a Fossil-generated website. But it can also be annoying to humans, since it requires them to log in. Hence, Fossil provides other techniques for blocking spiders which are less cumbersome to humans. <h2>Automatic hyperlinks based on UserAgent</h2> Fossil has the ability to selectively enable hyperlinks for users that lack the "h" capability based on their UserAgent string in the HTTP request header and on the browsers ability to run Javascript. |
︙ | ︙ | |||
92 93 94 95 96 97 98 | UserAgent string. Most spiders do not bother to run javascript and so to the spider the empty anchor tag will be useless. But all modern web browsers implement javascript, so hyperlinks will appears normally for human users. <h2>Further defenses</h2> | | | | | | 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 | UserAgent string. Most spiders do not bother to run javascript and so to the spider the empty anchor tag will be useless. But all modern web browsers implement javascript, so hyperlinks will appears normally for human users. <h2>Further defenses</h2> Recently (as of this writing, in the spring of 2013) the Fossil server on the SQLite website ([http://www.sqlite.org/src/]) has been hit repeatedly by Chinese spiders that use forged UserAgent strings to make them look like normal web browsers and which interpret javascript. We do not believe these attacks to be nefarious since SQLite is public domain and the attackers could obtain all information they ever wanted to know about SQLite simply by cloning the repository. Instead, we believe these "attacks" are coming from "script kiddies". But regardless of whether or not malice is involved, these attacks do present an unnecessary load on the server which reduces the responsiveness of the SQLite website for well-behaved and socially responsible users. For this reason, additional defenses against spiders have been put in place. On the Admin/Access page of Fossil, just below the "<b>Enable hyperlinks for "nobody" based on User-Agent and Javascript</b>" setting, there are now two additional subsettings that can be optionally enabled to control hyperlinks. The first subsetting waits to run the javascript that sets the "href=" attributes on anchor tags until after at least one "mouseover" event has been detected on the <body> |
︙ | ︙ | |||
135 136 137 138 139 140 141 | <h2>The ongoing struggle</h2> Fossil currently does a very good job of providing easy access to humans while keeping out troublesome robots and spiders. However, spiders and bots continue to grow more sophisticated, requiring ever more advanced defenses. This "arms race" is unlikely to ever end. The developers of Fossil will continue to try improve the spider defenses of Fossil so | | | | 135 136 137 138 139 140 141 142 143 144 145 146 147 | <h2>The ongoing struggle</h2> Fossil currently does a very good job of providing easy access to humans while keeping out troublesome robots and spiders. However, spiders and bots continue to grow more sophisticated, requiring ever more advanced defenses. This "arms race" is unlikely to ever end. The developers of Fossil will continue to try improve the spider defenses of Fossil so check back from time to time for the latest releases and updates. Readers of this page who have suggestions on how to improve the spider defenses in Fossil are invited to submit your ideas to the Fossil Users mailing list: [mailto:fossil-users@lists.fossil-scm.org | fossil-users@lists.fossil-scm.org]. |
Changes to www/blame.wiki.
︙ | ︙ | |||
16 17 18 19 20 21 22 | <ol type='1'> <li>Locate the check-in that contains the file that is to be annotated. Call this check-in C0. <li>Find all direct ancestors of C0. A direct ancestor is the closure of the primary parent of C0. Merged in branches are not part of the direct ancestors of C0. | | | | | | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 | <ol type='1'> <li>Locate the check-in that contains the file that is to be annotated. Call this check-in C0. <li>Find all direct ancestors of C0. A direct ancestor is the closure of the primary parent of C0. Merged in branches are not part of the direct ancestors of C0. <li>Prune the list of ancestors of C0 so that it contains only check-in in which the file to be annotated was modified. <li>Load the complete text of the file to be annotated from check-in C0. Call this version of the file F0. <li>Parse F0 into lines. Mark each line as "unchanged". <li>For each ancestor of C0 on the pruned list (call the ancestor CX), beginning with the most recent ancestor and moving toward the oldest ancestor, do the following steps: <ol type='a'> <li>Load the text for the file to be annotated as it existed in check-in CX. Call this text FX. <li>Compute a diff going from FX to F0. <li>For each line of F0 that is changed in the diff and which was previously marked "unchanged", update the mark to indicated that line was modified by CX. </ol> <li>Show each line of F0 together with its change mark, appropriately formatted. </ol> <h2>3.0 Discussion and Notes</h2> The time-consuming part of this algorithm is step 6b - computing the diff from all historical versions of the file to the version of the file under analysis. For a large file that has many historical changes, this can take several seconds. For this reason, the default [/help?cmd=/annotate|/annotate] webpage only shows those lines that where changed by the 20 most recent modifications to the file. This allows the loop on step 6 to terminate after only 19 diffs instead of the hundreds or thousands of diffs that might be required for a frequently modified file. As currently implemented (as of 2015-12-12) the annotate algorithm does not follow files across name changes. File name change information is available in the database, and so the algorithm could be enhanced to follow files across name changes by modifications to step 3. Step 2 is interesting in that it is [/artifact/6cb824a0417?ln=196-201 | implemented] using a [https://www.sqlite.org/lang_with.html#recursivecte|recursive common table expression]. |
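The marking loop in steps 5 and 6 can be pictured with a short sketch. The code below is an illustration only, not Fossil's implementation: it uses a deliberately crude test ("the line's exact text does not occur anywhere in FX") in place of the real diff of step 6b, and all of the type and function names are invented for this example.

<blockquote><verbatim>
#include <string.h>

/* One line of F0, the version of the file being annotated. */
typedef struct AnnLine AnnLine;
struct AnnLine {
  const char *zText;      /* Text of the line */
  const char *zCheckin;   /* Check-in blamed for the line, or NULL = "unchanged" */
};

/* One historical version FX of the file, from the pruned ancestor list. */
typedef struct FileVersion FileVersion;
struct FileVersion {
  const char **azLine;    /* Lines of the file as of this check-in */
  int nLine;              /* Number of lines */
  const char *zCheckin;   /* Hash of the check-in CX */
};

/* Crude stand-in for step 6b: treat a line of F0 as "changed" relative
** to FX when its exact text is absent from FX. */
static int changed_vs(const char *zText, const FileVersion *pFX){
  int i;
  for(i=0; i<pFX->nLine; i++){
    if( strcmp(zText, pFX->azLine[i])==0 ) return 0;
  }
  return 1;
}

/* Steps 5-6: walk the pruned ancestors, most recent first, and mark
** each still-unmarked line the first time it shows up as changed. */
void annotate_sketch(AnnLine *aF0, int nF0, const FileVersion *aAnc, int nAnc){
  int i, j;
  for(i=0; i<nAnc; i++){
    for(j=0; j<nF0; j++){
      if( aF0[j].zCheckin==0 && changed_vs(aF0[j].zText, &aAnc[i]) ){
        aF0[j].zCheckin = aAnc[i].zCheckin;
      }
    }
  }
  /* Lines still marked NULL were present, unchanged, in every ancestor. */
}
</verbatim></blockquote>

Limiting the outer loop to the 20 most recent ancestors, as the default /annotate page does, bounds the work regardless of how long the file's history is.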
Changes to www/branching.wiki.
1 2 3 4 5 6 | <title>Branching, Forking, Merging, and Tagging</title> <h2>Background</h2> In a simple and perfect world, the development of a project would proceed linearly, as shown in figure 1. | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 | <title>Branching, Forking, Merging, and Tagging</title> <h2>Background</h2> In a simple and perfect world, the development of a project would proceed linearly, as shown in figure 1. <table border=1 cellpadding=10 hspace=10 vspace=10 align="center"> <tr><td align="center"> <img src="branch01.gif" width=280 height=68><br> Figure 1 </td></tr></table> Each circle represents a check-in. For the sake of clarity, the check-ins are given small consecutive numbers. In a real system, of course, the check-in numbers would be 40-character SHA1 hashes since it is not possible to allocate collision-free sequential numbers in a distributed system. But as sequential numbers are easier to read, we will substitute them for the 40-character SHA1 hashes in this document. |
︙ | ︙ | |||
36 37 38 39 40 41 42 | it has no descendants. (We will give a more precise definition later of "leaf.") Alas, reality often interferes with the simple linear development of a project. Suppose two programmers make independent modifications to check-in 2. After both changes are committed, the check-in graph looks like figure 2: | | | | 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 | it has no descendants. (We will give a more precise definition later of "leaf.") Alas, reality often interferes with the simple linear development of a project. Suppose two programmers make independent modifications to check-in 2. After both changes are committed, the check-in graph looks like figure 2: <table border=1 cellpadding=10 hspace=10 vspace=10 align="center"> <tr><td align="center"> <img src="branch02.gif" width=210 height=140><br> Figure 2 </td></tr></table> The graph in figure 2 has two leaves: check-ins 3 and 4. Check-in 2 has two children, check-ins 3 and 4. We call this state a <i>fork</i>. Fossil tries to prevent forks. Suppose two programmers named Alice and Bob are each editing check-in 2 separately. Alice finishes her edits first and commits her changes, resulting in check-in 3. Later, when Bob |
︙ | ︙ | |||
75 76 77 78 79 80 81 | graphs with a single leaf. To resolve this situation, Alice can use the fossil <b>merge</b> command to merge in Bob's changes in her local copy of check-in 3. Then she can commit the results as check-in 5. This results in a DAG as shown in figure 3. | | | | 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 | graphs with a single leaf. To resolve this situation, Alice can use the fossil <b>merge</b> command to merge in Bob's changes in her local copy of check-in 3. Then she can commit the results as check-in 5. This results in a DAG as shown in figure 3. <table border=1 cellpadding=10 hspace=10 vspace=10 align="center"> <tr><td align="center"> <img src="branch03.gif" width=282 height=152><br> Figure 3 </td></tr></table> Check-in 5 is a child of check-in 3 because it was created by editing check-in 3. But check-in 5 also inherits the changes from check-in 4 by virtue of the merge. So we say that check-in 5 is a <i>merge child</i> of check-in 4 and that it is a <i>direct child</i> of check-in 3. The graph is now back to a single leaf (check-in 5). |
︙ | ︙ | |||
122 123 124 125 126 127 128 | development and another leaf that is the latest version that has been tested. When multiple leaves are desirable, we call this <i>branching</i> instead of <i>forking</i>. Figure 4 shows an example of a project where there are two branches, one for development work and another for testing. | | | | 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 | development and another leaf that is the latest version that has been tested. When multiple leaves are desirable, we call this <i>branching</i> instead of <i>forking</i>. Figure 4 shows an example of a project where there are two branches, one for development work and another for testing. <table border=1 cellpadding=10 hspace=10 vspace=10 align="center"> <tr><td align="center"> <img src="branch04.gif" width=426 height=123><br> Figure 4 </td></tr></table> The hypothetical scenario of figure 4 is this: The project starts and progresses to a point where (at check-in 2) it is ready to enter testing for its first release. In a real project, of course, there might be hundreds or thousands of check-ins before a project reaches this point, but for simplicity of presentation we will say that the project is ready after check-in 2. |
︙ | ︙ | |||
162 163 164 165 166 167 168 | <a name="tags"></a> <h2>Tags And Properties</h2> Tags and properties are used in fossil to help express the intent, and thus to distinguish between forks and branches. Figure 5 shows the same scenario as figure 4 but with tags and properties added: | | | | 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 | <a name="tags"></a> <h2>Tags And Properties</h2> Tags and properties are used in fossil to help express the intent, and thus to distinguish between forks and branches. Figure 5 shows the same scenario as figure 4 but with tags and properties added: <table border=1 cellpadding=10 hspace=10 vspace=10 align="center"> <tr><td align="center"> <img src="branch05.gif" width=485 height=177><br> Figure 5 </td></tr></table> A <i>tag</i> is a name that is attached to a check-in. A <i>property</i> is a name/value pair. Internally, fossil implements tags as properties with a NULL value. So, tags and properties really are much the same thing, and henceforth we will use the word "tag" to mean either a tag or a property. |
︙ | ︙ |
Changes to www/bugtheory.wiki.
︙ | ︙ | |||
25 26 27 28 29 30 31 | be permitted to create tickets. Recall that a fossil repository consists of an unordered collection of <i>artifacts</i>. (See the <a href="fileformat.wiki">file format document</a> for details.) Some artifacts have a special format, and among those are <a href="fileformat.wiki#tktchng">Ticket Change Artifacts</a>. | | | 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 | be permitted to create tickets. Recall that a fossil repository consists of an unordered collection of <i>artifacts</i>. (See the <a href="fileformat.wiki">file format document</a> for details.) Some artifacts have a special format, and among those are <a href="fileformat.wiki#tktchng">Ticket Change Artifacts</a>. One or more ticket change artifacts are associated with each ticket. A ticket is created by a ticket change artifact. Each subsequent modification of the ticket is a separate artifact. The "push", "pull", and "sync" algorithms share ticket change artifacts between repositories in the same way as every other artifact. In fact, the sync algorithm has no knowledge of the meaning of the artifacts it is syncing. As far as the sync algorithm is concerned, all artifacts are |
︙ | ︙ | |||
110 111 112 113 114 115 116 | to repopulate the table using the new column names. Note that the TICKET table schema and content is part of the local state of a repository and is not shared with other repositories during a sync, push, or pull. Each repository also defines scripts used to generate web pages for creating new tickets, viewing existing tickets, and modifying an existing ticket. These scripts consist of HTML with an embedded | | | 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 | to repopulate the table using the new column names. Note that the TICKET table schema and content is part of the local state of a repository and is not shared with other repositories during a sync, push, or pull. Each repository also defines scripts used to generate web pages for creating new tickets, viewing existing tickets, and modifying an existing ticket. These scripts consist of HTML with an embedded scripts written a Tcl-like language called "TH1". Every new fossil repository is created with default scripts. Paul Ruizendaal has written documentation on the TH1 language that is available at [http://www.sqliteconcepts.org/THManual.pdf]. Administrators wishing to customize their ticket entry, viewing, and editing screens should modify the default scripts to suit their needs. These screen generator scripts are part of the local state of a repository and are not shared with other repositories during a sync, push, or pull. <i>To be continued...</i> |
Changes to www/build.wiki.
︙ | ︙ | |||
139 140 141 142 143 144 145 | file "<b>win\buildmsvc.bat</b>" may be used and it will attempt to detect and use the latest installed version of MSVC.<br><br>To enable the optional <a href="https://www.openssl.org/">OpenSSL</a> support, first <a href="https://www.openssl.org/source/">download the official source code for OpenSSL</a> and extract it to an appropriately named "<b>openssl-X.Y.ZA</b>" subdirectory within the local [/tree?ci=trunk&name=compat | compat] directory (e.g. | | | 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 | file "<b>win\buildmsvc.bat</b>" may be used and it will attempt to detect and use the latest installed version of MSVC.<br><br>To enable the optional <a href="https://www.openssl.org/">OpenSSL</a> support, first <a href="https://www.openssl.org/source/">download the official source code for OpenSSL</a> and extract it to an appropriately named "<b>openssl-X.Y.ZA</b>" subdirectory within the local [/tree?ci=trunk&name=compat | compat] directory (e.g. "<b>compat/openssl-1.0.2j</b>"), then make sure that some recent <a href="http://www.perl.org/">Perl</a> binaries are installed locally, and finally run one of the following commands: <blockquote><pre> nmake /f Makefile.msc FOSSIL_ENABLE_SSL=1 FOSSIL_BUILD_SSL=1 PERLDIR=C:\full\path\to\Perl\bin </pre></blockquote> <blockquote><pre> buildmsvc.bat FOSSIL_ENABLE_SSL=1 FOSSIL_BUILD_SSL=1 PERLDIR=C:\full\path\to\Perl\bin |
︙ | ︙ |
Changes to www/changes.wiki.
1 2 | <title>Change Log</title> | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > | < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 | <title>Change Log</title> <a name='v1_37'></a> <h2>Changes for Version 1.37 (2017-XX-YY)</h2> * Added support for the ms=EXACT|LIKE|GLOB|REGEXP query parameter on the [/help?cmd=/timeline|/timeline] webpage. * Fix a C99-ism that prevents the 1.36 release from building with MSVC. * Fix [/help?cmd=ticket|ticket set] when using the "+" prefix with fields from the "ticketchng" table. * Enhance the "brlist" page to make use of branch colors. * Remove the "fusefs" command from builds that do not have the underlying support enabled. * Fixes for incremental git import/export. * Minor security enhancements to [./encryptedrepos.wiki|encrypted repositories]. * TH1 enhancements: <ul><li>Add <nowiki>[unversioned content]</nowiki> command.</li> <li>Add <nowiki>[unversioned list]</nowiki> command.</li> <li>Add project_description variable.</li> </ul> <a name='v1_36'></a> <h2>Changes for Version 1.36 (2016-10-24)</h2> * Add support for [./unvers.wiki|unversioned content], the [/help?cmd=unversioned|fossil unversioned] command and the [/help?cmd=/uv|/uv] and [/uvlist] web pages. * The [/uv/download.html|download page] is moved into [./unvers.wiki|unversioned content] so that the self-hosting Fossil websites no longer uses any external content. * Added the "Search" button to the graphical diff generated by the --tk option on the [/help?cmd=diff|diff] command. * Added the "--checkin VERSION" option to the [/help?cmd=diff|diff] command. * Various performance enhancements to the [/help?cmd=diff|diff] command. * Update internal Unicode character tables, used in regular expression handling, from version 8.0 to 9.0. * Update the built-in SQLite to version 3.15. Fossil now requires the SQLITE_DBCONFIG_MAINDBNAME interface of SQLite which is only available in SQLite version 3.15 and later and so Fossil will not work with earlier SQLite versions. * Fix [https://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg23618.html|multi-line timeline bug] * Enhance the [/help?cmd=purge|fossil purge] command. * New command [/help?cmd=shell|fossil shell]. * SQL parameters whose names are all lower-case in Ticket Report SQL queries are filled in using HTTP query parameter values. * Added support for [./childprojects.wiki|child projects] that are able to pull from their parent but not push. * Added the -nocomplain option to the TH1 "query" command. * Added support for the chng=GLOBLIST query parameter on the [/help?cmd=/timeline|/timeline] webpage. <a name='v1_35'></a> <h2>Changes for Version 1.35 (2016-06-14)</h2> * Enable symlinks by default on all non-Windows platforms. * Enhance the [/md_rules|Markdown formatting] so that hyperlinks that begin with "/" are relative to the root of the Fossil repository. * Rework the [/help?cmd=/setup_ulist|/setup_list page] (the User List page) to display all users in a click-to-sort table. * Fix backslash-octal escape on filenames while importing from git * When markdown documents begin with <h1> HTML elements, use that header at the document title. * Added the [/help?cmd=/bigbloblist|/bigbloblist page]. 
* Enhance the [/help?cmd=/finfo|/finfo page] so that when it is showing the ancestors of a particular file version, it only shows direct ancestors and omits changes on branches, thus making it show the same set |
︙ | ︙ | |||
27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 | <li>Add tcl_platform(engine) and tcl_platform(platform) array elements.</li> </ul> * Get autosetup working with MinGW. * Fix autosetup detection of zlib in the source tree. * Added autosetup detection of OpenSSL when it may be present under the "compat" subdirectory of the source tree. * Option --baseurl now works on Windows. <h2>Changes for Version 1.34 (2015-11-02)</h2> * Make the [/help?cmd=clean|fossil clean] command undoable for files less than 10MiB. * Update internal Unicode character tables, used in regular expression handling, from version 7.0 to 8.0. * Add the new [/help?cmd=amend|amend] command which is used to modify | > > > > > > > > > > | 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 | <li>Add tcl_platform(engine) and tcl_platform(platform) array elements.</li> </ul> * Get autosetup working with MinGW. * Fix autosetup detection of zlib in the source tree. * Added autosetup detection of OpenSSL when it may be present under the "compat" subdirectory of the source tree. * Added the [/help?cmd=reparent|fossil reparent] command * Added --include and --exclude options to [/help?cmd=tarball|fossil tarball] and [/help?cmd=zip|fossil zip] and the in= and ex= query parameters to the [/help?cmd=/tarball|/tarball] and [/help?cmd=/zip|/zip] web pages. * Add support for [./encryptedrepos.wiki|encrypted Fossil repositories]. * If the FOSSIL_PWREADER environment variable is set, then use the program it names in place of getpass() to read passwords and passphrases * Option --baseurl now works on Windows. * Numerious documentation improvements. * Update the built-in SQLite to version 3.13.0. <a name='v1_34'></a> <h2>Changes for Version 1.34 (2015-11-02)</h2> * Make the [/help?cmd=clean|fossil clean] command undoable for files less than 10MiB. * Update internal Unicode character tables, used in regular expression handling, from version 7.0 to 8.0. * Add the new [/help?cmd=amend|amend] command which is used to modify |
︙ | ︙ | |||
64 65 66 67 68 69 70 71 72 73 74 75 76 77 | * Fix --hard option to [/help?cmd=mv|fossil mv] and [/help?cmd=rm|fossil rm] to enable them to work properly with certain relative paths. * Change the mimetype for ".n" and ".man" files to text/plain. * Display improvements in the [/help?cmd=bisect|fossil bisect chart] command. * Updated the built-in SQLite to version 3.9.1 and activated JSON1 and FTS5 support (both currently unused within Fossil). <h2>Changes for Version 1.33 (2015-05-23)</h2> * Improved fork detection on [/help?cmd=update|fossil update], [/help?cmd=status|fossil status] and related commands. * Change the default skin to what used to be called "San Francisco Modern". * Add the [/repo-tabsize] web page * Add [/help?cmd=import|fossil import --svn], for importing a subversion repository into fossil which was exported using "svnadmin dump". | > | 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 | * Fix --hard option to [/help?cmd=mv|fossil mv] and [/help?cmd=rm|fossil rm] to enable them to work properly with certain relative paths. * Change the mimetype for ".n" and ".man" files to text/plain. * Display improvements in the [/help?cmd=bisect|fossil bisect chart] command. * Updated the built-in SQLite to version 3.9.1 and activated JSON1 and FTS5 support (both currently unused within Fossil). <a name='v1_33'></a> <h2>Changes for Version 1.33 (2015-05-23)</h2> * Improved fork detection on [/help?cmd=update|fossil update], [/help?cmd=status|fossil status] and related commands. * Change the default skin to what used to be called "San Francisco Modern". * Add the [/repo-tabsize] web page * Add [/help?cmd=import|fossil import --svn], for importing a subversion repository into fossil which was exported using "svnadmin dump". |
︙ | ︙ | |||
113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 | * Permit filtering weekday and file [/help?cmd=/reports|reports] by user. Also ensure the user parameter is preserved when changing types. Add a field for direct entry of the user name to each applicable report. * Create parent directories of [/help?cmd=settings|empty-dirs] if they don't already exist. * Inhibit timeline links to wiki pages that have been deleted. <h2>Changes for Version 1.32 (2015-03-14)</h2> * When creating a new repository using [/help?cmd=init|fossil init], ensure that the new repository is fully compatible with historical versions of Fossil by having a valid manifest as RID 1. * Anti-aliased rendering of arrowheads on timeline graphs. * Added vi/less-style key bindings to the --tk diff GUI. * Documentation updates to fix spellings and changes all "checkins" to "check-ins". * Add the --repolist option to server commands such as [/help?cmd=server|fossil server] or [/help?cmd=http|fossil http]. * Added the "Xekri" skin. * Enhance the "ln=" query parameter on artifact displays to accept multiple ranges, separate by spaces (or "+" when URL-encoded). * Added [/help?cmd=forget|fossil forget] as an alias for [/help?cmd=rm|fossil rm]. <h2>Changes For Version 1.31 (2015-02-23)</h2> * Change the auxiliary schema by adding columns MLINK.ISAUX and MLINK.PMID columns to the schema, to support better drawing of file change graphs. A [/help?cmd=rebuild|fossil rebuild] is recommended but is not required. so that the new graph drawing logic can work effectively. * Added [/search|search] over Check-in comments, Documents, Tickets and Wiki. Disabled by default. The search can be either a full-scan or it | > > | 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 | * Permit filtering weekday and file [/help?cmd=/reports|reports] by user. Also ensure the user parameter is preserved when changing types. Add a field for direct entry of the user name to each applicable report. * Create parent directories of [/help?cmd=settings|empty-dirs] if they don't already exist. * Inhibit timeline links to wiki pages that have been deleted. <a name='v1_33'></a> <h2>Changes for Version 1.32 (2015-03-14)</h2> * When creating a new repository using [/help?cmd=init|fossil init], ensure that the new repository is fully compatible with historical versions of Fossil by having a valid manifest as RID 1. * Anti-aliased rendering of arrowheads on timeline graphs. * Added vi/less-style key bindings to the --tk diff GUI. * Documentation updates to fix spellings and changes all "checkins" to "check-ins". * Add the --repolist option to server commands such as [/help?cmd=server|fossil server] or [/help?cmd=http|fossil http]. * Added the "Xekri" skin. * Enhance the "ln=" query parameter on artifact displays to accept multiple ranges, separate by spaces (or "+" when URL-encoded). * Added [/help?cmd=forget|fossil forget] as an alias for [/help?cmd=rm|fossil rm]. <a name='v1_31'></a> <h2>Changes For Version 1.31 (2015-02-23)</h2> * Change the auxiliary schema by adding columns MLINK.ISAUX and MLINK.PMID columns to the schema, to support better drawing of file change graphs. A [/help?cmd=rebuild|fossil rebuild] is recommended but is not required. so that the new graph drawing logic can work effectively. * Added [/search|search] over Check-in comments, Documents, Tickets and Wiki. Disabled by default. 
The search can be either a full-scan or it |
︙ | ︙ | |||
180 181 182 183 184 185 186 187 188 189 190 191 192 193 | * Added the [/mimetype_list] page. * Added the [/hash-collisions] page. * Allow the user of Common Table Expressions in the SQL that defaults ticket reports. * Break out the components (css, footer, and header) for the various built-in skins into separate files in the source tree. <h2>Changes For Version 1.30 (2015-01-19)</h2> * Added the [/help?cmd=bundle|fossil bundle] command. * Added the [/help?cmd=purge|fossil purge] command. * Added the [/help?cmd=publish|fossil publish] command. * Added the [/help?cmd=unpublished|fossil unpublished] command. * Enhance the [/tree] webpage to show the age of each file with the option to sort by age. | > | 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 | * Added the [/mimetype_list] page. * Added the [/hash-collisions] page. * Allow the user of Common Table Expressions in the SQL that defaults ticket reports. * Break out the components (css, footer, and header) for the various built-in skins into separate files in the source tree. <a name='v1_30'></a> <h2>Changes For Version 1.30 (2015-01-19)</h2> * Added the [/help?cmd=bundle|fossil bundle] command. * Added the [/help?cmd=purge|fossil purge] command. * Added the [/help?cmd=publish|fossil publish] command. * Added the [/help?cmd=unpublished|fossil unpublished] command. * Enhance the [/tree] webpage to show the age of each file with the option to sort by age. |
︙ | ︙ | |||
250 251 252 253 254 255 256 257 258 259 260 261 262 263 | diff option in a separate file for easier editing. * (Internal:) Implement a system of compile-time checks to help ensure the correctness of printf-style formatting strings. * Fix CVE-2014-3566, also known as the POODLE SSL 3.0 vulnerability. * Numerous documentation fixes and improvements. * Other obscure and minor bug fixes - see the timeline for details. <h2>Changes For Version 1.29 (2014-06-12)</h2> * Add the ability to display content, diffs and annotations for UTF16 text files in the web interface. * Add the "SaveAs..." and "Invert" buttons to the graphical diff display that results from using the --tk option with the [/help/diff | fossil diff] command. * The [/reports] page now requires Read ("o") permissions. The "byweek" | > | 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 | diff option in a separate file for easier editing. * (Internal:) Implement a system of compile-time checks to help ensure the correctness of printf-style formatting strings. * Fix CVE-2014-3566, also known as the POODLE SSL 3.0 vulnerability. * Numerous documentation fixes and improvements. * Other obscure and minor bug fixes - see the timeline for details. <a name='v1_29'></a> <h2>Changes For Version 1.29 (2014-06-12)</h2> * Add the ability to display content, diffs and annotations for UTF16 text files in the web interface. * Add the "SaveAs..." and "Invert" buttons to the graphical diff display that results from using the --tk option with the [/help/diff | fossil diff] command. * The [/reports] page now requires Read ("o") permissions. The "byweek" |
︙ | ︙ |
Changes to www/checkin_names.wiki.
︙ | ︙ | |||
53 54 55 56 57 58 59 | <blockquote><pre> fossil info e5a734a19a9826973e1d073b49dc2a16aa2308f9 </pre></blockquote> The full 40-character SHA1 hash is unwieldy to remember and type, though, so Fossil also accepts a unique prefix of the hash, using any combination of upper and lower case letters, as long as the prefix is at least 4 | | | 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 | <blockquote><pre> fossil info e5a734a19a9826973e1d073b49dc2a16aa2308f9 </pre></blockquote> The full 40-character SHA1 hash is unwieldy to remember and type, though, so Fossil also accepts a unique prefix of the hash, using any combination of upper and lower case letters, as long as the prefix is at least 4 characters long. Hence the following commands all accomplish the same thing as the above: <blockquote><pre> fossil info e5a734a19a9 fossil info E5a734A fossil info e5a7 </pre></blockquote> Many web-interface screens identify check-ins by a 10- or 16-character prefix of the canonical name. <h2>Tags And Branch Names</h2> Using a tag or branch name where a check-in name is expected causes Fossil to choose the most recent check-in with that tag or branch name. So, for example, as of this writing the most recent check-in that
︙ | ︙ | |||
110 111 112 113 114 115 116 | name? In such cases, you can prefix the tag name with "tag:". For example: <blockquote><tt> fossil info tag:deed2 </tt></blockquote> | | | 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 | name? In such cases, you can prefix the tag name with "tag:". For example: <blockquote><tt> fossil info tag:deed2 </tt></blockquote> The "tag:deed2" name will refer to the most recent check-in tagged with "deed2" not to the check-in whose canonical name begins with "deed2". <h2>Whole Branches</h2> Usually when a branch name is specified, it means the latest check-in on that branch. But for some commands (ex: [/help/purge|purge]) a branch name |
︙ | ︙ | |||
145 146 147 148 149 150 151 | check-in that occurs no later than the timestamp given: * <i>YYYY-MM-DD</i> * <i>YYYY-MM-DD HH:MM</i> * <i>YYYY-MM-DD HH:MM:SS</i> * <i>YYYY-MM-DD HH:MM:SS.SSS</i> | | | | | | | | 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 | check-in that occurs no later than the timestamp given: * <i>YYYY-MM-DD</i> * <i>YYYY-MM-DD HH:MM</i> * <i>YYYY-MM-DD HH:MM:SS</i> * <i>YYYY-MM-DD HH:MM:SS.SSS</i> The space between the day and the time can optionally be replaced by an uppercase <b>T</b> and the entire timestamp can optionally be followed by "<b>z</b>" or "<b>Z</b>". In the fourth form with fractional seconds, any number of digits may follow the decimal point, though due to precision limits only the first three digits will be significant. In its default configuration, Fossil interprets and displays all dates in Universal Coordinated Time (UTC). This tends to work the best for distributed projects where participants are scattered around the globe. But there is an option on the Admin/Timeline page of the web-interface to switch to local time. The "<b>Z</b>" suffix on a timestamp check-in name is meaningless if Fossil is in the default mode of using UTC for everything, but if Fossil has been switched to local time mode, then the "<b>Z</b>" suffix means to interpret that particular timestamp using UTC instead of local time. For an example of how timestamps are useful, consider the homepage for the Fossil website itself: <blockquote> http://www.fossil-scm.org/fossil/doc/<b>trunk</b>/www/index.wiki </blockquote> The bold component of that URL is a check-in name. To see what the
︙ | ︙ | |||
189 190 191 192 193 194 195 | the timestamp. So, for example: <blockquote> fossil update trunk:2010-07-01T14:30 </blockquote> Would cause Fossil to update the working check-out to be the most recent | | | 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 | the timestamp. So, for example: <blockquote> fossil update trunk:2010-07-01T14:30 </blockquote> Would cause Fossil to update the working check-out to be the most recent check-in on the trunk that is not more recent than 14:30 (UTC) on July 1, 2010. <h2>Root Of A Branch</h2> A branch name that begins with the "<tt>root:</tt>" prefix refers to the last check-in in the parent branch prior to the beginning of the branch. Such a label is useful, for example, in computing all diffs for a single
︙ | ︙ | |||
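The "root:" prefix described above pairs naturally with the [/help/diff|diff] command. The following is a hedged sketch only; the branch name "my-feature" is a hypothetical placeholder, not a branch taken from this repository:
<blockquote><pre>
fossil diff --from root:my-feature --to my-feature
</pre></blockquote>
This should show every change made on the hypothetical "my-feature" branch since it diverged from its parent, without mixing in changes made on the parent branch itself.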
218 219 220 221 222 223 224 | repository) then a few extra tags apply. The "current" tag means the current check-out. The "next" tag means the youngest child of the current check-out. And the "previous" or "prev" tag means the primary (non-merge) parent of the current check-out. For embedded documentation, the tag "ckout" means the version as present in the local source tree on disk, provided that the web server is started using | | | 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 | repository) then a few extra tags apply. The "current" tag means the current check-out. The "next" tag means the youngest child of the current check-out. And the "previous" or "prev" tag means the primary (non-merge) parent of the current check-out. For embedded documentation, the tag "ckout" means the version as present in the local source tree on disk, provided that the web server is started using "fossil ui" or "fossil server" from within the source tree. This tag can be used to preview local changes to documentation before committing them. It does not apply to CLI commands. <h2>Additional Examples</h2> To view the changes in the most recent check-in prior to the version currently checked out: |
︙ | ︙ |
Added www/childprojects.wiki.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 | <title>Child Projects</title> <h2>Background</h2> The default behavior of Fossil is to share everything (all check-ins, tickets, wiki, etc) between all clients and all servers. Such a policy helps to promote a cohesive design for a cathedral-style project run by a small cliche of developers - the sort of project for which Fossil was designed. But sometimes it is desirable to branch off a side project that does not sync back to the master but does continue to track changes in the master. For example, the master project might be an open-source project like [https://www.sqlite.org/|SQLite] and a team might want to do a proprietary closed-source enhancement to that master project in a separate repository. All changes in the master project should flow forward into the derived project, but care must be taken to prevent proprietary content from the derived project from leaking back into the master. <h2>Child Projects</h2> A scenario such as the above can be accomplished in Fossil by creating a child project. The child project is able to freely pull from the parent, but the parent cannot push or pull from the child nor is the child able to push to the parent. Content flows from parent to child only, and then only at the request of the child. <h2>Creating a Child Project</h2> To create a new child project, first clone the parent. Then make manual SQL changes to the child repository as follows: <blockquote><verbatim> UPDATE config SET name='parent-project-code' WHERE name='project-code'; UPDATE config SET name='parent-project-name' WHERE name='project-name'; INSERT INTO config(name,value) VALUES('project-code',lower(hex(randomblob(20)))); INSERT INTO config(name,value) VALUES('project-name','CHILD-PROJECT-NAME'); </verbatim></blockquote> Modify the CHILD-PROJECT-NAME in the last statement to be the name of the child project, of course. The repository is now a separate project, independent from its parent. Clone the new project to the developers as needed. The child project and the parent project will not normally be able to sync with one another, since they are now separate projects with distinct project codes. However, if the "--from-parent-project" command-line option is provided to the "[/help?cmd=pull|fossil pull]" command in the child, and the URL of parent repository is also provided on the command-line, then updates to the parent project that occurred after the child was created will be added to the child repository. Thus, by periodically doing a pull --from-parent-project, the child project is able to stay up to date with all the latest changes in the parent. |
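As a usage sketch of the pull behavior described above (the URL and file name are hypothetical placeholders, and this assumes the child repository was created exactly as shown):
<blockquote><pre>
fossil pull https://example.com/parent-project --from-parent-project -R child.fossil
</pre></blockquote>
Run periodically, a command along these lines should bring post-split changes from the parent project into the child while never pushing anything back to the parent.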
Deleted www/cmd_.wiki-template.
|
| < < < < < < < < < < < < < < < < < < < < < < |
Changes to www/concepts.wiki.
︙ | ︙ | |||
11 12 13 14 15 16 17 18 19 20 21 22 23 24 | There are many such systems in use today. Fossil strives to distinguish itself from the others by being extremely simple to setup and operate. This document is intended as a quick introduction to the concepts behind Fossil. <h2>2.0 Composition Of A Project</h2> <img src="concept1.gif" align="right" hspace="10"> A software project normally consists of a "source tree". A source tree is a hierarchy of files that are used to generate the end product. The source tree changes over time as the software grows and expands and as features are added and bugs | > > > > > | 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 | There are many such systems in use today. Fossil strives to distinguish itself from the others by being extremely simple to setup and operate. This document is intended as a quick introduction to the concepts behind Fossil. See also: * [./whyusefossil.wiki#definitions|Definitions] * [./quickstart.wiki|Quick start guide] <h2>2.0 Composition Of A Project</h2> <img src="concept1.gif" align="right" hspace="10"> A software project normally consists of a "source tree". A source tree is a hierarchy of files that are used to generate the end product. The source tree changes over time as the software grows and expands and as features are added and bugs |
︙ | ︙ | |||
124 125 126 127 128 129 130 | a software project. <h3>2.2 Manifests</h3> Associated with every check-in is a special file called the [./fileformat.wiki#manifest| "manifest"]. The manifest is a listing of all other files in | | | 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 | a software project. <h3>2.2 Manifests</h3> Associated with every check-in is a special file called the [./fileformat.wiki#manifest| "manifest"]. The manifest is a listing of all other files in that source tree. The manifest contains the (complete) artifact ID of the file and the name of the file as it appears on disk, and thus serves as a mapping from artifact ID to disk name. The artifact ID of the manifest is the identifier for the entire check-in. When you look at a "timeline" of changes in Fossil, the ID associated with each check-in or commit is really just the artifact ID of the manifest for that check-in. <p>The manifest file is not normally a real file on disk. Instead, the manifest is computed in memory by Fossil whenever it needs it. However, the "fossil setting manifest on" command will cause the manifest file to be materialized to disk, if desired. Both Fossil itself, and SQLite cause the manifest file to be materialized to disk so that the makefiles for these projects can read the manifest and embed version information in generated binaries. <p>Fossil automatically generates a manifest whenever you "commit" a new check-in. So this is not something that you, the developer, need to worry about. The format of a manifest is intentionally designed to be simple to parse, so that if you want to read and interpret a manifest, either by hand or with a script, that is easy to do. But you will probably never need to do so.</p>
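A brief, hedged illustration of the materialized manifest mentioned above: after enabling the setting inside an open check-out, a plain-text "manifest" file should appear at the root of the source tree and can be inspected with ordinary tools.
<blockquote><pre>
fossil setting manifest on
head manifest
</pre></blockquote>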
︙ | ︙ | |||
193 194 195 196 197 198 199 | CVS, gzip, diff, rsync, Python, Perl, Tcl, Java, apache, PostgreSQL, MySQL, SQLite, patch, or any similar software on your system in order to use Fossil effectively. You will want to have some kind of text editor for entering check-in comments. Fossil will use whatever text editor is identified by your VISUAL environment variable. Fossil will also use GPG to clearsign your manifests if you happen to have it installed, but Fossil will skip that step if GPG missing from your system. | | | | 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 | CVS, gzip, diff, rsync, Python, Perl, Tcl, Java, apache, PostgreSQL, MySQL, SQLite, patch, or any similar software on your system in order to use Fossil effectively. You will want to have some kind of text editor for entering check-in comments. Fossil will use whatever text editor is identified by your VISUAL environment variable. Fossil will also use GPG to clearsign your manifests if you happen to have it installed, but Fossil will skip that step if GPG missing from your system. You can optionally set up Fossil to use external "diff" programs, though Fossil has an excellent built-in "diff" algorithm that works fine for most people. If you happen to have Tcl/Tk installed on your system, Fossil will use it to generate a graphical "diff" display when you use the --tk option to the "diff" command, but this too is entirely optional. To uninstall Fossil, simply delete the executable. To upgrade an older version of Fossil to a newer version, just replace the old executable with the new one. You might need to run "<b>fossil all rebuild</b>" to restructure your repositories after an upgrade. Running "all rebuild" never hurts, so when upgrading it is a good policy to run it even if it is not strictly necessary. To use Fossil, simply type the name of the executable in your shell, followed by one of the various built-in commands and arguments appropriate for that command. For example: |
︙ | ︙ | |||
259 260 261 262 263 264 265 | <h3>4.1 Autosync Workflow</h3> <ol> <li> Establish a local repository using either the <b>new</b> command to start a new project, or the <b>clone</b> command to make a clone | | | | 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 | <h3>4.1 Autosync Workflow</h3> <ol> <li> Establish a local repository using either the <b>new</b> command to start a new project, or the <b>clone</b> command to make a clone of a repository for an existing project. </li> <li> Establish one or more source trees using the <b>open</b> command with the name of the repository file as its argument. </li> <li> The <b>open</b> command in the previous step populates your local source tree with a copy of the latest check-in. Usually this is what you want. In the rare cases where it is not, use the <b>update</b> command to switch to a different check-in. Use the <b>timeline</b> or <b>leaves</b> commands to identify alternative check-ins to switch to. </li> <li> Edit the code. Add new files to the source tree using the <b>add</b> command. Omit files from future check-ins using the <b>rm</b> command. |
︙ | ︙ | |||
295 296 297 298 299 300 301 | tree into your local repository. After your commit completes, Fossil will automatically <b>push</b> your changes back to the server you cloned from or whatever server you most recently synced with. </li> <li> When your coworkers make their own changes, you can merge those changes | | | 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 | tree into your local repository. After your commit completes, Fossil will automatically <b>push</b> your changes back to the server you cloned from or whatever server you most recently synced with. </li> <li> When your coworkers make their own changes, you can merge those changes into your local source tree using the <b>update</b> command. In autosync mode, <b>update</b> will first go back to the server you cloned from or with which you most recently synced, and pull down all recent changes into your local repository. Then it will merge recent changes into your local source tree. If you do an <b>update</b> and find that it messes something up in your source tree (perhaps a co-worker checked in incompatible changes) you can use the <b>undo</b> command to back out the changes. </li> <li> Repeat all of the above until you have generated great software. A condensed command-line sketch of this workflow appears just after this list. </li> </ol>
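The following is an illustrative condensation of the autosync workflow above; the URL, file names, and commit message are hypothetical placeholders rather than values from any real project:
<blockquote><pre>
fossil clone https://example.com/project project.fossil
mkdir project && cd project && fossil open ../project.fossil
# ...edit files...
fossil add newfile.c
fossil commit -m "Describe the change"   # autosync pushes back to the server
fossil update                            # pull and merge coworkers' changes
</pre></blockquote>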
︙ | ︙ |
Changes to www/contribute.wiki.
︙ | ︙ | |||
8 9 10 11 12 13 14 | In order to accept your contributions, we <u>must</u> have a [./copyright-release.pdf | Contributor Agreement (PDF)] (or [./copyright-release.html | as HTML]) on file for you. We require this in order to maintain clear title to the Fossil code and prevent the introduction of code with incompatible licenses or other entanglements that might cause legal problems for Fossil users. Many larger companies | | | | | | 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 | In order to accept your contributions, we <u>must</u> have a [./copyright-release.pdf | Contributor Agreement (PDF)] (or [./copyright-release.html | as HTML]) on file for you. We require this in order to maintain clear title to the Fossil code and prevent the introduction of code with incompatible licenses or other entanglements that might cause legal problems for Fossil users. Many larger companies and other lawyer-rich organizations require this as a precondition to using Fossil. If you do not wish to submit a Contributor Agreement, we would still welcome your suggestions and example code, but we will not use your code directly - we will be forced to re-implement your changes from scratch which might take longer. <h2>2.0 Submitting Patches</h2> Suggested changes or bug fixes can be submitted by creating a patch against the current source tree. Email patches to <a href="mailto:drh@sqlite.org">drh@sqlite.org</a>. Be sure to describe in detail what the patch does and which version of Fossil it is written against. A contributor agreement is not strictly necessary to submit a patch. However, without a contributor agreement on file, your patch will be used for reference only - it will not be applied to the code. This may delay acceptance of your patch. Your patches or changes might not be accepted even if you do have |
︙ | ︙ | |||
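For the patch-submission process described above, the built-in diff command is usually enough to produce a reviewable patch. A hedged example; the output file name is arbitrary:
<blockquote><pre>
fossil diff > my-change.patch
</pre></blockquote>
The resulting unified diff can be attached to the email together with the Fossil version it was generated against (for example, the output of "fossil version").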
51 52 53 54 55 56 57 | Contributors are asked to make all non-trivial changes on a branch. The Fossil Architect (Richard Hipp) will merge changes onto the trunk.</p> Contributors are required to follow the [./checkin.wiki | pre-checkin checklist] prior to every check-in to the Fossil self-hosting repository. This checklist is short and succinct | | | | 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 | Contributors are asked to make all non-trivial changes on a branch. The Fossil Architect (Richard Hipp) will merge changes onto the trunk.</p> Contributors are required to follow the [./checkin.wiki | pre-checkin checklist] prior to every check-in to the Fossil self-hosting repository. This checklist is short and succinct and should only require a few seconds to follow. Contributors should print out a copy of the pre-checkin checklist and keep it on a notecard beside their workstations, for quick reference. Contributors should review the [./style.wiki | Coding Style Guidelines] and mimic the coding style used throughout the rest of the Fossil source code. Your code should blend in. A third-party reader should be unable to distinguish your code from any other code in the source corpus. <h2>4.0 Testing</h2> Fossil has the beginnings of a [../test/release-checklist.wiki | release checklist] but this is an area that needs further work. (Your contributions here are welcomed!) Contributors with check-in privileges are expected to run the release checklist on any major changes they contribute, and if appropriate expand the checklist and/or the automated test scripts to cover their additions. <h2>5.0 See Also</h2> * [./build.wiki | How To Compile And Install Fossil] * [./makefile.wiki | The Fossil Build Process] * [./tech_overview.wiki | A Technical Overview of Fossil] * [./adding_code.wiki | Adding Features To Fossil]
Changes to www/copyright-release.html.
︙ | ︙ | |||
72 73 74 75 76 77 78 | </ul> </ol> <p>By filling in the following information and signing your name, you agree to be bound by all of the terms set forth in this agreement. Please print clearly.</p> | < | | < | 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 | </ul> </ol> <p>By filling in the following information and signing your name, you agree to be bound by all of the terms set forth in this agreement. Please print clearly.</p> <p><table width="80%" border="1" cellpadding="0" cellspacing="0" align="center"> <tr><td width="20%" valign="top">Your name & email:</td><td width="80%"> <!-- Replace this line with your name and email --> <p> </td></tr> <tr><td valign="top">Company name:<br>(if applicable)</td><td> <!-- Replace this line with your company name --> <p> </td></tr> <tr><td valign="top">Postal address:</td><td> <!-- Replace this line and the next line with your postal address --> <p> </p><p> </p><p> </p> </td></tr> <tr><td valign="top">Signature:</td><td> <p> </td></tr> <tr><td valign="top">Date:</td><td> <p> </td></tr> </table> <p>Send completed forms to: <blockquote> Hipp, Wyrick & Company, Inc.<br> 6200 Maple Cove Lane<br> Charlotte, NC 28269-1086<br> USA </p> |
Changes to www/custom_ticket.wiki.
︙ | ︙ | |||
63 64 65 66 67 68 69 | <td><u>Not publicly visible</u>. Used by developers to contact you with questions.</td> </tr> <th1>enable_output 1</th1> </pre> This bit of code will get rid of the "email" field entry for logged-in users. Since we know the user's information, we don't have to ask for it. NOTE: it | | | | 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 | <td><u>Not publicly visible</u>. Used by developers to contact you with questions.</td> </tr> <th1>enable_output 1</th1> </pre> This bit of code will get rid of the "email" field entry for logged-in users. Since we know the user's information, we don't have to ask for it. NOTE: it might be good to automatically scoop up the user's email and put it here. </p> </blockquote> <h2>Modify the 'view ticket' page</h2><blockquote> <p> Look for the text "Contact:" (about halfway through). Then insert these lines after the closing tr tag and before the "enable_output" line: <pre> <tr> <td align="right">Assigned to:</td><td bgcolor="#d0d0d0"> $<assigned_to> </td> <td align="right">Opened by:</td><td bgcolor="#d0d0d0"> $<opened_by> </td> </pre> This will add a row which displays these two fields, in the event the user has "edit" capability. </p> </blockquote> <h2>Modify the 'edit ticket' page</h2><blockquote> <p> Before the "Severity:" line, add this: <pre> |
︙ | ︙ |
Changes to www/customskin.md.
︙ | ︙ | |||
142 143 144 145 146 147 148 149 150 151 152 153 154 155 | Before expanding the TH1 within the header and footer, Fossil first initializes a number of TH1 variables to values that depend on repository settings and the specific page being generated. * **project_name** - The project_name variable is filled with the name of the project as configured under the Admin/Configuration menu. * **title** - The title variable holds the title of the page being generated. The title variable is special in that it is deleted after the header script runs and before the footer script. This is necessary to avoid a conflict with a variable by the same name used | > > > > | 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 | Before expanding the TH1 within the header and footer, Fossil first initializes a number of TH1 variables to values that depend on repository settings and the specific page being generated. * **project_name** - The project_name variable is filled with the name of the project as configured under the Admin/Configuration menu. * **project_description** - The project_description variable is filled with the description of the project as configured under the Admin/Configuration menu. * **title** - The title variable holds the title of the page being generated. The title variable is special in that it is deleted after the header script runs and before the footer script. This is necessary to avoid a conflict with a variable by the same name used
︙ | ︙ |
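As a minimal sketch of how the variables listed above might be referenced from a custom header, using the $&lt;variable&gt; expansion form that the stock Fossil headers use (the surrounding markup and CSS class names here are invented for illustration, not Fossil defaults):
<blockquote><pre>
&lt;div class="header"&gt;
  &lt;div class="title"&gt;$&lt;project_name&gt;: $&lt;title&gt;&lt;/div&gt;
  &lt;div class="subtitle"&gt;$&lt;project_description&gt;&lt;/div&gt;
&lt;/div&gt;
</pre></blockquote>
With something along these lines in the header, each generated page should pick up the project name and description configured under Admin/Configuration, plus the per-page title.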
Changes to www/delta_encoder_algorithm.wiki.
︙ | ︙ | |||
151 152 153 154 155 156 157 | needed more bytes to encode the range than there were bytes in the range, then no instructions are emitted and the window is moved one byte forward. The "base" is left unchanged in that case.</p> <p>The processing loop stops at one of two conditions: <ol> <li>The encoder decided to move the window forward, but the end of the | | | 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 | needed more bytes to encode the range than there were bytes in the range, then no instructions are emitted and the window is moved one byte forward. The "base" is left unchanged in that case.</p> <p>The processing loop stops at one of two conditions: <ol> <li>The encoder decided to move the window forward, but the end of the window reached the end of the "target". </li> <li>After the emission of instructions the new "base" location is within NHASH bytes of the end of the "target", i.e. there are at most NHASH bytes left. </li> </ol> </p>
︙ | ︙ |
Changes to www/delta_format.wiki.
︙ | ︙ | |||
159 160 161 162 163 164 165 | <tr><td> </td> <td>~E@Y0, </td><td>Copy </td><td> 4046 @ 2176 </td></tr> <tr><td>Trailer</td><td>2zMM3E </td><td>Checksum</td><td> -1101438770 </td></tr> </table> <p>The unified diff behind the above delta is</p> <table border=1><tr><td><pre> | | | | | | | | | | | 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 | <tr><td> </td> <td>~E@Y0, </td><td>Copy </td><td> 4046 @ 2176 </td></tr> <tr><td>Trailer</td><td>2zMM3E </td><td>Checksum</td><td> -1101438770 </td></tr> </table> <p>The unified diff behind the above delta is</p> <table border=1><tr><td><pre> bluepeak:(761) ~/Projects/Tcl/Fossil/Devel/devel > diff -u ../DELTA/old ../DELTA/new --- ../DELTA/old 2007-08-23 21:14:40.000000000 -0700 +++ ../DELTA/new 2007-08-23 21:14:33.000000000 -0700 @@ -5,7 +5,7 @@ * If the server does not have write permission on the database file, or on the directory containing the database file (and - it is thus unable to update database because it cannot create + it is thus unable to update the database because it cannot create a rollback journal) then it currently fails silently on a push. It needs to return a helpful error. @@ -27,8 +27,8 @@ * Additional information displayed for the "vinfo" page: + All leaves of this version that are not included in the - descendant list. With date, user, comment, and hyperlink. - Leaves in the descendant table should be marked as such. + descendant list. With date, user, comment, and hyperlink. + Leaves in the descendant table should be marked as such. See the compute_leaves() function to see how to find all leaves. + Add file diff links to the file change list. @@ -37,7 +37,7 @@ * The /xfer handler (for push, pull, and clone) does not do delta compression. This results in excess bandwidth usage. - There are some code in xfer.c that are sketches of ideas on + There are some pieces in xfer.c that are sketches of ideas on how to do delta compression, but nothing has been implemented. * Enhancements to the diff and tkdiff commands in the cli. @@ -45,7 +45,7 @@ single file. Allow diffs against any two arbitrary versions, not just diffs against the current check-out. Allow configuration options to replace tkdiff with some other - visual differ of the users choice. + visual differ of the users choice. Example: eskil. * Ticketing interface (expand this bullet) </pre></td></tr></table> <a name="notes"></a><h2>Notes</h2> |
︙ | ︙ |
Changes to www/embeddeddoc.wiki.
︙ | ︙ | |||
40 41 42 43 44 45 46 | For example, the <i><baseurl></i> for the fossil project itself is either <b>http://www.fossil-scm.org/fossil</b> or <b>http://www.hwaci.com/cgi-bin/fossil</b>. If you launch the web server using the "<b>fossil server</b>" command line, then the <i><baseurl></i> is usually <b>http://localhost:8080/</b>. | | | | | | | | | | 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 | For example, the <i><baseurl></i> for the fossil project itself is either <b>http://www.fossil-scm.org/fossil</b> or <b>http://www.hwaci.com/cgi-bin/fossil</b>. If you launch the web server using the "<b>fossil server</b>" command line, then the <i><baseurl></i> is usually <b>http://localhost:8080/</b>. The <i><version></i> is any unique prefix of the check-in ID for the check-in containing the documentation you want to access. Or <i><version></i> can be the name of a [./branching.wiki | branch] in order to show the documentation for the latest version of that branch. Or <i><version></i> can be one of the keywords "<b>tip</b>" or "<b>ckout</b>". The "<b>tip</b>" keyword means to use the most recent check-in. This is useful if you want to see the very latest version of the documentation. The "<b>ckout</b>" keywords means to pull the documentation file from the local source tree on disk, not from the any check-in. The "<b>ckout</b>" keyword normally only works when you start your server using the "<b>fossil server</b>" or "<b>fossil ui</b>" command line and is intended to show what the documentation you are currently editing looks like before you check it in. Finally, the <i><filename></i> element of the URL is the pathname of the documentation file relative to the root of the source tree. The mimetype (and thus the rendering) of documentation files is determined by the file suffix. Fossil currently understands [/mimetype_list|many different file suffixes], including all the popular ones such as ".css", ".gif", ".htm", ".html", ".jpg", ".jpeg", ".png", and ".txt". Documentation files whose names end in ".wiki" use the [/wiki_rules | fossil wiki markup] - a safe subset of HTML together with some wiki rules for paragraph breaks, lists, and hyperlinks. Documentation files ending in ".md" or ".markdown" use the [/md_rules | Markdown markup langauge]. Documentation files ending in ".txt" are plain text. Wiki, markdown, and plain text documentation files are rendered with the standard fossil header and footer added. Most other mimetypes are delivered directly to the requesting web browser without interpretation, additions, or changes. Files with the mimetype "text/html" (the .html or .htm suffix) are usually rendered directly to the browser without interpretation. However, if the file begins with a <div> element like this: <b><div class='fossil-doc' data-title='<i>Title Text</i>'></b> Then the standard Fossil header and footer are added to the document prior to being displayed. The "class='fossil-doc'" attribute is required for this to occur. The "data-title='...'" attribute is |
︙ | ︙ | |||
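To make the URL scheme above concrete, here is a hedged pair of examples assuming a repository served locally with "fossil ui" (so the base URL is http://localhost:8080) and containing a documentation file named www/index.wiki:
<blockquote><pre>
http://localhost:8080/doc/trunk/www/index.wiki     latest trunk version
http://localhost:8080/doc/ckout/www/index.wiki     uncommitted version in the open check-out
</pre></blockquote>
The same pattern applies with a branch name, a check-in prefix, or a timestamp in place of "trunk".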
115 116 117 118 119 120 121 | CGI mode. The "index.html" CGI script looks like this: <blockquote><pre> #!/usr/bin/fossil repository: /fossil/fossil.fossil </pre></blockquote> | | | | 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 | CGI mode. The "index.html" CGI script looks like this: <blockquote><pre> #!/usr/bin/fossil repository: /fossil/fossil.fossil </pre></blockquote> This is one of four ways to set up a <a href="./server.wiki">fossil web server</a>. The "<b>/trunk/</b>" part of the URL tells fossil to use the documentation files from the most recent trunk check-in. If you wanted to see an historical version of this document, you could substitute the name of a check-in for "<b>/trunk/</b>". For example, to see the version of this document associated with check-in [9be1b00392], simply replace the "<b>/trunk/</b>" with "<b>/9be1b00392/</b>". You can also substitute the symbolic name for a particular version or branch. For example, you might replace "<b>/trunk/</b>" with "<b>/experimental/</b>" to get the latest version of this document in the "experimental" branch. The symbolic name can also be a date and time string in any of the following formats:</p> <ul> <li> <i>YYYY-MM-DD</i> <li> <i>YYYY-MM-DD</i><b>T</b><i>HH:MM</i> <li> <i>YYYY-MM-DD</i><b>T</b><i>HH:MM:SS</i> </ul> When the symbolic name is a date and time, fossil shows the version of the document that was most recently checked in as of the date and time specified. So, for example, to see what the fossil website looked like at the beginning of 2010, enter: <blockquote> <a href="http://www.fossil-scm.org/index.html/doc/2010-01-01/www/index.wiki"> http://www.fossil-scm.org/index.html/doc/<b>2010-01-01</b>/www/index.wiki |
︙ | ︙ |
Changes to www/encryptedrepos.wiki.
1 2 3 4 5 6 7 | <title>How To Use Encrypted Repositories</title> <h2>Introduction</h2><blockquote> Fossil can be compiled so that it works with encrypted repositories using the [https://www.sqlite.org/see/doc/trunk/www/readme.wiki|SQLite Encryption Extension]. This technical note explains the process. </blockquote> <h2>Building An Encryption-Enabled Fossil</h2><blockquote> | | | > > | > > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 | <title>How To Use Encrypted Repositories</title> <h2>Introduction</h2><blockquote> Fossil can be compiled so that it works with encrypted repositories using the [https://www.sqlite.org/see/doc/trunk/www/readme.wiki|SQLite Encryption Extension]. This technical note explains the process. </blockquote> <h2>Building An Encryption-Enabled Fossil</h2><blockquote> The SQLite Encryption Extension (SEE) is proprietary software and requires [http://www.hwaci.com/cgi-bin/see-step1|purchasing a license]. <p> Assuming you have an SEE license, the first step of compiling Fossil to use SEE is to create an SEE-enabled version of the SQLite database source code. This alternative SQLite database source file should be called "sqlite3-see.c" and should be placed in the src/ subfolder of the Fossil sources, right beside the public-domain "sqlite3.c" source file. Also make a copy of the SEE-enabled "shell.c" file, renamed as "shell-see.c", and place it in the src/ subfolder beside the original "shell.c". <p> Add the --with-see command-line option to the configuration script to enable the use of SEE on unix-like systems. <blockquote><pre> ./configure --with-see; make </pre></blockquote> <p>To build for Windows using MSVC, add the "USE_SEE=1" argument to the "nmake" command line. <blockquote><pre> nmake -f makefile.msc USE_SEE=1 </pre></blockquote> </blockquote> <h2>Using Encrypted Repositories</h2><blockquote> Any Fossil repositories whose filename ends with ".efossil" is taken to be an encrypted repository. Fossil will prompt for the encryption password and attempt to open the repository database using that password. <p> Every invocation of fossil on an encrypted repository requires retyping the encryption password. To avoid excess password typing, consider using the "fossil shell" command which prompts for the password just once, then reuses it for each subsequent Fossil command entered at the prompt. <p> On Windows, the "fossil server", "fossil ui", and "fossil shell" commands do not (currently) work on an encrypted repository. </blockquote> <h2>Additional Security</h2><blockquote> Use the FOSSIL_SECURITY_LEVEL environment for additional protection. <blockquote><pre> export FOSSIL_SECURITY_LEVEL=1 </pre></blockquote> A setting of 1 or greater prevents fossil from trying to remember the previous sync password. <blockquote><pre> export FOSSIL_SECURITY_LEVEL=2 </pre></blockquote> A setting of 2 or greater causes all password prompts to be preceeded by a random translation matrix similar to the following: <blockquote><pre> abcde fghij klmno pqrst uvwyz qresw gjymu dpcoa fhkzv inlbt </pre></blockquote> When entering the password, the user must substitute the letter on the second line that corresponds to the letter on the first line. Uppercase substitutes for uppercase inputs, and lowercase substitutes for lowercase inputs. 
Letters that are not in the translation matrix (digits, punctuation, and "x") are not modified. For example, given the translation matrix above, if the password is "pilot-9crazy-xube", then the user must type "fmpav-9ekqtb-xirw". This simple substitution cypher helps prevent password capture by keyloggers. </blockquote> |
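The matrix-based substitution described above can be checked mechanically. A hedged sketch using the standard Unix tr utility with the exact example matrix and password from the text (lowercase only; a real prompt would also need the corresponding uppercase ranges):
<blockquote><pre>
echo 'pilot-9crazy-xube' | tr 'abcdefghijklmnopqrstuvwyz' 'qreswgjymudpcoafhkzvinlbt'
# prints: fmpav-9ekqtb-xirw
</pre></blockquote>
Characters outside the matrix (digits, punctuation, and "x") pass through unchanged, which matches the behavior described above.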
Changes to www/env-opts.md.
︙ | ︙ | |||
86 87 88 89 90 91 92 | processed. `--sqlstats`: (Sets `g.fSqlStats`.) Print a number of performance statistics about each SQLite database used when it is closed. `--sshtrace`: (Sets `g.fSshTrace`.) | | > > > > | 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 | processed. `--sqlstats`: (Sets `g.fSqlStats`.) Print a number of performance statistics about each SQLite database used when it is closed. `--sshtrace`: (Sets `g.fSshTrace`.) `--ssl-identity`: The fully qualified name of the file containing the client certificate and private key to use, in PEM format. It can be created by concatenating the client certificate and private key files. This identity will be presented to SSL servers to authenticate the client, in addition to the normal password authentication. `--systemtrace`: (Sets `g.fSystemTrace`.) Trace all commands launched as sub processes. `--user LOGIN`: (Sets `g.zLogin`) Also `-U LOGIN`. Set the user name used with the repository. |
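A hedged illustration of the --ssl-identity option described above; the file names and URL are placeholders:
<blockquote><pre>
cat client-cert.pem client-key.pem > fossil-identity.pem
fossil sync --ssl-identity fossil-identity.pem https://example.com/repo
</pre></blockquote>
The first line builds the combined PEM identity by concatenation, as the option's description suggests; the second assumes the option is accepted by network-using commands such as sync.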
︙ | ︙ | |||
256 257 258 259 260 261 262 263 264 265 266 267 268 269 | to enable TH1 documents in fossil. `TH1_ENABLE_HOOKS`: Override the local or global setting `tcl-hooks` to enable TH1 hooks in fossil. `TH1_ENABLE_TCL`: Override the local or global setting `tcl` to enable Tcl in fossil. `TMP`: On Windows, the location of temporary files. The first environment variable found in the environment that names an existing directory from the list `TMP`, `TEMP`, `USERPROFILE`, the Windows directory (usually `C:\WINDOWS`), `TEMP`, `TMP`, and the current directory (aka `.`) is the temporary folder. | > > > > > > > > | 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 | to enable TH1 documents in fossil. `TH1_ENABLE_HOOKS`: Override the local or global setting `tcl-hooks` to enable TH1 hooks in fossil. `TH1_ENABLE_TCL`: Override the local or global setting `tcl` to enable Tcl in fossil. `TH1_TEST_ANON_CAPS`: Override the default anonymous permissions used when processing the `--set-anon-caps` option for the `test-th-eval`, `test-th-render`, and `test-th-source` test commands. `TH1_TEST_USER_CAPS`: Override the default user permissions used when processing the `--set-user-caps` option for the `test-th-eval`, `test-th-render`, and `test-th-source` test commands. `TMP`: On Windows, the location of temporary files. The first environment variable found in the environment that names an existing directory from the list `TMP`, `TEMP`, `USERPROFILE`, the Windows directory (usually `C:\WINDOWS`), `TEMP`, `TMP`, and the current directory (aka `.`) is the temporary folder. |
︙ | ︙ |
Changes to www/event.wiki.
︙ | ︙ | |||
21 22 23 24 25 26 27 | * <b>Milestones</b>. Project milestones, such as releases or beta-test cycles, can be recorded as technotes. The timeline entry for the technote can be something simple like "Version 1.2.3" perhaps with a bright color background to draw attention to the entry and the wiki content can contain release notes, for example. * <b>Blog Entries</b>. Blog entries from developers describing the current | | | 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 | * <b>Milestones</b>. Project milestones, such as releases or beta-test cycles, can be recorded as technotes. The timeline entry for the technote can be something simple like "Version 1.2.3" perhaps with a bright color background to draw attention to the entry and the wiki content can contain release notes, for example. * <b>Blog Entries</b>. Blog entries from developers describing the current state of a project, or rationale for various design decisions, or roadmaps for future development, can be entered as technotes. * <b>Process Checkpoints</b>. For projects that have a formal process, technotes can be used to record the completion or the initiation of various process steps. For example, a technote can be used to record the successful completion of a long-running test, perhaps with performance results and details of where the test was run and who
︙ | ︙ | |||
47 48 49 50 51 52 53 | No project is required to use technotes. But technotes can help many projects stay better organized and provide a better historical record of the development progress. <h2>Viewing Technotes</h2> | | | 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 | No project is required to use technotes. But technotes can help many projects stay better organized and provide a better historical record of the development progress. <h2>Viewing Technotes</h2> Because technotes are considered a special kind of wiki, users must have permission to read wiki in order to read technotes. Enable the "j" permission under the /Setup/Users menu in order to give specific users or user classes the ability to view wiki and technotes. Technotes show up on the timeline. Click on the hyperlink beside the technote title to see the complete text. <h2>Creating And Editing Technotes</h2> There is a hyperlink under the /wikihelp menu that can be used to create new technotes. And there is a submenu hyperlink on technote displays for editing existing technotes. Users must have check-in privileges (permission "i") in order to create or edit technotes. In addition, users must have create-wiki privilege (permission "f") to create new technotes and edit-wiki privilege (permission "k") in order to edit existing technotes. Technote content may be formatted as [/wiki_rules | Fossil wiki], [/md_rules | Markdown], or plain text.
Changes to www/faq.wiki.
︙ | ︙ | |||
23 24 25 26 27 28 29 | <blockquote> <b>fossil [/help/ui|ui]</b> <i>REPOSITORY-FILENAME</i> </blockquote> And your default web browser should pop up and automatically point to the fossil interface. (Hint: You can omit the <i>REPOSITORY-FILENAME</i> | | > > > | 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 | <blockquote> <b>fossil [/help/ui|ui]</b> <i>REPOSITORY-FILENAME</i> </blockquote> And your default web browser should pop up and automatically point to the fossil interface. (Hint: You can omit the <i>REPOSITORY-FILENAME</i> if you are within an open check-out.) See also: [http://fuelscm.org/] </blockquote></li> <a name="q2"></a> <p><b>(2) What is the difference between a "branch" and a "fork"?</b></p> <blockquote>This is a big question - too big to answer in a FAQ. Please read the <a href="branching.wiki">Branching, Forking, Merging, and Tagging</a> document.</blockquote></li> |
︙ | ︙ | |||
57 58 59 60 61 62 63 | off from. If you already have a fork in your check-in tree and you want to convert that fork to a branch, you can do this from the web interface. First locate the check-in that you want to be the initial check-in of your branch on the timeline and click on its link so that you are on the <b>ci</b> page. Then find the "<b>edit</b>" | | | | 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 | off from. If you already have a fork in your check-in tree and you want to convert that fork to a branch, you can do this from the web interface. First locate the check-in that you want to be the initial check-in of your branch on the timeline and click on its link so that you are on the <b>ci</b> page. Then find the "<b>edit</b>" link (near the "Commands:" label) and click on that. On the "Edit Check-in" page, check the box beside "Branching:" and fill in the name of your new branch to the right and press the "Apply Changes" button.</blockquote></li> <a name="q4"></a> <p><b>(4) How do I tag a check-in?</b></p> <blockquote>There are several ways: |
︙ | ︙ | |||
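A short, hedged example of the --private workflow described in answer (5); the commit message is a placeholder:
<blockquote><pre>
fossil commit --private -m "experimental work not for publication"
</pre></blockquote>
Unless --branch is also given, the resulting check-in should land on the "private" branch and, as described above, is never pushed to other repositories.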
84 85 86 87 88 89 90 | <b>fossil [/help/branch|tag] add</b> <i>TAGNAME</i> <i>CHECK-IN</i> </blockquote> The CHECK-IN in the previous line can be any [./checkin_names.wiki | valid check-in name format]. You can also add (and remove) tags from a check-in using the | | | | | 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 | <b>fossil [/help/branch|tag] add</b> <i>TAGNAME</i> <i>CHECK-IN</i> </blockquote> The CHECK-IN in the previous line can be any [./checkin_names.wiki | valid check-in name format]. You can also add (and remove) tags from a check-in using the [./webui.wiki | web interface]. First locate the check-in that you what to tag on the timeline, then click on the link to go the detailed information page for that check-in. Then find the "<b>edit</b>" link (near the "Commands:" label) and click on that. There are controls on the edit page that allow new tags to be added and existing tags to be removed.</blockquote></li> <a name="q5"></a> <p><b>(5) How do I create a private branch that won't get pushed back to the main repository.</b></p> <blockquote>Use the <b>--private</b> command-line option on the <b>commit</b> command. The result will be a check-in which exists on your local repository only and is never pushed to other repositories. All descendants of a private check-in are also private. Unless you specify something different using the <b>--branch</b> and/or <b>--bgcolor</b> options, the new private check-in will be put on a branch named "private" with an orange background color. You can merge from the trunk into your private branch in order to keep |
︙ | ︙ |
Changes to www/fileformat.wiki.
1 2 3 4 5 6 | <title>Fossil File Formats</title> <h1 align="center"> Fossil File Formats </h1> The global state of a fossil repository is kept simple so that it can | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 | <title>Fossil File Formats</title> <h1 align="center"> Fossil File Formats </h1> The global state of a fossil repository is kept simple so that it can endure in useful form for decades or centuries. A fossil repository is intended to be readable, searchable, and extensible by people not yet born. The global state of a fossil repository is an unordered set of <i>artifacts</i>. An artifact might be a source code file, the text of a wiki page, part of a trouble ticket, or one of several special control artifacts used to show the relationships between other artifacts within the project. Each artifact is normally represented on disk as a separate file. Artifacts can be text or binary. In addition to the global state, each fossil repository also contains local state. The local state consists of web-page formatting preferences, authorized users, ticket display and reporting formats, and so forth. The global state is shared in common among all repositories for the same project, whereas the local state is often different in separate repositories. The local state is not versioned and is not synchronized with the global state. The local state is not composed of artifacts and is not intended to be enduring. This document is concerned with global state only. Local state is only mentioned here in order to distinguish it from global state. Each artifact in the repository is named by its SHA1 hash. No prefixes or meta information is added to an artifact before its hash is computed. The name of an artifact in the repository is exactly the same SHA1 hash that is computed by sha1sum on the file as it exists in your source tree.</p> Some artifacts have a particular format which gives them special meaning to fossil. Fossil recognizes: <ul> <li> [#manifest | Manifests] </li> |
︙ | ︙ | |||
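As an aside to the naming rule above: an artifact's name really is nothing more than the SHA1 hash of its raw bytes. The sketch below is not code from the Fossil tree; it assumes OpenSSL's SHA1() purely for convenience (any SHA-1 implementation would serve) and simply hashes a file with no prefix or metadata added.
<blockquote><pre>
/* Sketch: compute the Fossil artifact name of a file, i.e. the SHA1
** of its raw bytes with nothing prepended.  Uses OpenSSL's SHA1() for
** illustration only; link with -lcrypto. */
#include <openssl/sha.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv){
  if( argc!=2 ){ fprintf(stderr, "usage: %s FILE\n", argv[0]); return 1; }
  FILE *f = fopen(argv[1], "rb");
  if( f==0 ) return 1;
  fseek(f, 0, SEEK_END);
  long n = ftell(f);
  rewind(f);
  unsigned char *buf = malloc(n>0 ? n : 1);
  if( buf==0 || fread(buf, 1, n, f)!=(size_t)n ) return 1;
  fclose(f);
  unsigned char digest[SHA_DIGEST_LENGTH];
  SHA1(buf, n, digest);       /* no header, no metadata - just the bytes */
  for(int i=0; i<SHA_DIGEST_LENGTH; i++) printf("%02x", digest[i]);
  printf("\n");
  free(buf);
  return 0;
}
</pre></blockquote>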
82 83 84 85 86 87 88 | A manifest is a text file. Newline characters (ASCII 0x0a) separate the file into "cards". Each card begins with a single character "card type". Zero or more arguments may follow the card type. All arguments are separated from each other and from the card-type character by a single space character. There is no surplus white space between arguments | | | 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 | A manifest is a text file. Newline characters (ASCII 0x0a) separate the file into "cards". Each card begins with a single character "card type". Zero or more arguments may follow the card type. All arguments are separated from each other and from the card-type character by a single space character. There is no surplus white space between arguments and no leading or trailing whitespace except for the newline character that acts as the card separator. All cards of the manifest occur in strict sorted lexicographical order. No card may be duplicated. The entire manifest may be PGP clear-signed, but otherwise it may contain no additional text or data beyond what is described here. |
︙ | ︙ | |||
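The card layout described above is simple enough to split with a few lines of C. The following sketch is illustrative only: it assumes a single well-formed card has already been isolated, and the helper name parse_card is invented here; Fossil's real parser (manifest.c) performs far more validation.
<blockquote><pre>
/* Sketch: split one card into its type letter and space-separated
** arguments.  'line' is a NUL-terminated card without its trailing
** newline. */
#include <string.h>
#include <stdio.h>

static int parse_card(char *line, char *pType, char *azArg[], int mxArg){
  int nArg = 0;
  if( line[0]==0 || (line[1]!=' ' && line[1]!=0) ) return -1; /* malformed */
  *pType = line[0];
  char *p = line[1]==' ' ? line+2 : line+1;
  while( *p && nArg<mxArg ){
    azArg[nArg++] = p;                 /* arguments are space-separated */
    char *sp = strchr(p, ' ');
    if( sp==0 ) break;
    *sp = 0;
    p = sp+1;
  }
  return nArg;
}

int main(void){
  char card[] = "D 2016-11-07T00:50:10";   /* an example D card */
  char type, *args[8];
  int n = parse_card(card, &type, args, 8);
  printf("card %c with %d argument(s): %s\n", type, n, n>0 ? args[0] : "");
  return 0;
}
</pre></blockquote>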
112 113 114 115 116 117 118 | A manifest may optionally have a single B-card. The B-card specifies another manifest that serves as the "baseline" for this manifest. A manifest that has a B-card is called a delta-manifest and a manifest that omits the B-card is a baseline-manifest. The other manifest identified by the argument of the B-card must be a baseline-manifest. A baseline-manifest records the complete contents of a check-in. | | | | 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 | A manifest may optionally have a single B-card. The B-card specifies another manifest that serves as the "baseline" for this manifest. A manifest that has a B-card is called a delta-manifest and a manifest that omits the B-card is a baseline-manifest. The other manifest identified by the argument of the B-card must be a baseline-manifest. A baseline-manifest records the complete contents of a check-in. A delta-manifest records only changes from its baseline. A manifest must have exactly one C-card. The sole argument to the C-card is a check-in comment that describes the check-in that the manifest defines. The check-in comment is text. The following escape sequences are applied to the text: A space (ASCII 0x20) is represented as "\s" (ASCII 0x5C, 0x73). A newline (ASCII 0x0a) is "\n" (ASCII 0x5C, 0x6E). A backslash (ASCII 0x5C) is represented as two backslashes "\\". Apart from space and newline, no other whitespace characters are allowed in the check-in comment. Nor are any unprintable characters allowed in the comment. A manifest must have exactly one D-card. The sole argument to the D-card is a date-time stamp in the ISO8601 format. The
︙ | ︙ | |||
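To make the C-card escape rules above concrete, here is a small illustrative encoder. It is not taken from the Fossil sources; escape_comment is a hypothetical helper name, and the output buffer is assumed to be at least twice as large as the input.
<blockquote><pre>
/* Sketch: apply the C-card escapes: space -> "\s", newline -> "\n",
** backslash -> "\\".  Illustrative only. */
#include <stdio.h>

static void escape_comment(const char *in, char *out){
  while( *in ){
    switch( *in ){
      case ' ':  *out++ = '\\'; *out++ = 's'; break;
      case '\n': *out++ = '\\'; *out++ = 'n'; break;
      case '\\': *out++ = '\\'; *out++ = '\\'; break;
      default:   *out++ = *in; break;
    }
    in++;
  }
  *out = 0;
}

int main(void){
  char enc[200];
  escape_comment("Merge trunk", enc);
  printf("C %s\n", enc);     /* prints: C Merge\strunk */
  return 0;
}
</pre></blockquote>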
164 165 166 167 168 169 170 | A manifest has zero or one N-cards. The N-card specifies the mimetype for the text in the comment of the C-card. If the N-card is omitted, a default mimetype is used. A manifest has zero or one P-cards. Most manifests have one P-card. The P-card has a varying number of arguments that | | | | | | | | | | | | | 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 | A manifest has zero or one N-cards. The N-card specifies the mimetype for the text in the comment of the C-card. If the N-card is omitted, a default mimetype is used. A manifest has zero or one P-cards. Most manifests have one P-card. The P-card has a varying number of arguments that define other manifests from which the current manifest is derived. Each argument is a 40-character lowercase hexadecimal SHA1 of a predecessor manifest. All arguments to the P-card must be unique within that card. The first argument is the SHA1 of the direct ancestor of the manifest. Other arguments define manifests with which the first was merged to yield the current manifest. Most manifests have a P-card with a single argument. The first manifest in the project has no ancestors and thus has no P-card or (depending on the Fossil version) an empty P-card (no arguments). A manifest has zero or more Q-cards. A Q-card is similar to a P-card in that it defines a predecessor to the current check-in. But whereas a P-card defines the immediate ancestor or a merge ancestor, the Q-card is used to identify a single check-in or a small range of check-ins which were cherry-picked for inclusion in or exclusion from the current manifest. The first argument of the Q-card is the artifact ID of another manifest (the "target") which has had its changes included or excluded in the current manifest. The target is preceded by "+" or "-" to show inclusion or exclusion, respectively. The optional second argument to the Q-card is another manifest artifact ID which is the "baseline" for the cherry-pick. If omitted, the baseline is the primary parent of the target. The changes included or excluded consist of all changes moving from the baseline to the target. The Q-card was added to the interface specification on 2011-02-26. Older versions of Fossil will reject manifests that contain Q-cards. A manifest may optionally have a single R-card. The R-card has a single argument which is the MD5 checksum of all files in the check-in except the manifest itself. The checksum is expressed as 32 characters of lowercase hexadecimal. The checksum is computed as follows: For each file in the check-in (except for the manifest itself) in strict sorted lexicographical order, take the pathname of the file relative to the root of the repository, append a single space (ASCII 0x20), the size of the file in ASCII decimal, a single newline character (ASCII 0x0A), and the complete text of the file. Compute the MD5 checksum of the result. A manifest might contain one or more T-cards used to set |
︙ | ︙ | |||
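The R-card computation described in the hunk above can be sketched as follows. This is an illustration, not Fossil's implementation: it uses OpenSSL's MD5 routines as a stand-in hash library, and the hard-coded file list is a placeholder for what a real manifest generator would take from the F-cards.
<blockquote><pre>
/* Sketch of the R-card: for each file, in sorted order, hash
** "<pathname> <size>\n<file content>", then print the MD5 digest as
** 32 lowercase hex digits.  Link with -lcrypto. */
#include <openssl/md5.h>
#include <stdio.h>

int main(void){
  const char *azFile[] = { "Makefile.in", "src/main.c" };  /* sorted stand-ins */
  MD5_CTX ctx;
  MD5_Init(&ctx);
  for(int i=0; i<2; i++){
    FILE *f = fopen(azFile[i], "rb");
    if( f==0 ) return 1;
    fseek(f, 0, SEEK_END);
    long sz = ftell(f);
    rewind(f);
    char hdr[600];
    int n = snprintf(hdr, sizeof(hdr), "%s %ld\n", azFile[i], sz);
    MD5_Update(&ctx, hdr, n);              /* "name size\n" ... */
    char buf[8192];
    size_t got;
    while( (got = fread(buf, 1, sizeof(buf), f))>0 ){
      MD5_Update(&ctx, buf, got);          /* ... then the file content */
    }
    fclose(f);
  }
  unsigned char digest[MD5_DIGEST_LENGTH];
  MD5_Final(digest, &ctx);
  printf("R ");
  for(int i=0; i<MD5_DIGEST_LENGTH; i++) printf("%02x", digest[i]);
  printf("\n");
  return 0;
}
</pre></blockquote>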
226 227 228 229 230 231 232 | Each manifest has a single U-card. The argument to the U-card is the login of the user who created the manifest. The login name is encoded using the same character escapes as is used for the check-in comment argument to the C-card. A manifest must have a single Z-card as its last line. The argument to the Z-card is a 32-character lowercase hexadecimal MD5 hash | | | | | | | | | | 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 | Each manifest has a single U-card. The argument to the U-card is the login of the user who created the manifest. The login name is encoded using the same character escapes as is used for the check-in comment argument to the C-card. A manifest must have a single Z-card as its last line. The argument to the Z-card is a 32-character lowercase hexadecimal MD5 hash of all prior lines of the manifest up to and including the newline character that immediately precedes the "Z". The Z-card is a sanity check to prove that the manifest is well-formed and consistent. A sample manifest from Fossil itself can be seen [/artifact/28987096ac | here]. <a name="cluster"></a> <h2>2.0 Clusters</h2> A cluster is an artifact that declares the existence of other artifacts. Clusters are used during repository synchronization to help reduce network traffic. As such, clusters are an optimization and may be removed from a repository without loss or damage to the underlying project code. Clusters follow a syntax that is very similar to manifests. A cluster is a line-oriented text file. Newline characters (ASCII 0x0a) separate the artifact into cards. Each card begins with a single character "card type". Zero or more arguments may follow the card type. All arguments are separated from each other and from the card-type character by a single space character. There is no surplus white space between arguments and no leading or trailing whitespace except for the newline character that acts as the card separator. All cards of a cluster occur in strict sorted lexicographical order. No card may be duplicated. The cluster may not contain additional text or data beyond what is described here. Unlike manifests, clusters are never PGP signed. Allowed cards in the cluster are as follows: <blockquote> <b>M</b> <i>artifact-id</i><br /> <b>Z</b> <i>checksum</i> </blockquote> A cluster contains one or more "M" cards followed by a single "Z" card. Each M card has a single argument which is the artifact ID of another artifact in the repository. The Z card works exactly like the Z card of a manifest. The argument to the Z card is the lower-case hexadecimal representation of the MD5 checksum of all prior cards in the cluster. The Z-card is required. An example cluster from Fossil can be seen [/artifact/d03dbdd73a2a8 | here]. |
︙ | ︙ | |||
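The Z-card rule (an MD5 of everything up to and including the newline that precedes the final "Z" line) applies to clusters and manifests alike. A minimal verification sketch, assuming OpenSSL's one-shot MD5() and using a fabricated two-card artifact whose stored checksum will not actually match, looks like this:
<blockquote><pre>
/* Sketch: recompute the Z-card of an artifact.  The artifact text and
** its Z value here are made up, so a mismatch is expected; the point is
** only which bytes get hashed. */
#include <openssl/md5.h>
#include <stdio.h>
#include <string.h>

int main(void){
  const char *art =
      "M 0000000000000000000000000000000000000000\n"  /* stand-in M card */
      "Z 0123456789abcdef0123456789abcdef\n";          /* stand-in Z card */
  const char *z = strstr(art, "\nZ ");
  if( z==0 ) return 1;
  size_t nPrefix = (size_t)(z - art) + 1;   /* include the preceding newline */
  unsigned char digest[MD5_DIGEST_LENGTH];
  MD5((const unsigned char*)art, nPrefix, digest);
  char hex[33];
  for(int i=0; i<MD5_DIGEST_LENGTH; i++) sprintf(hex+2*i, "%02x", digest[i]);
  printf("computed Z: %s\n", hex);
  printf("%s\n", strncmp(hex, z+3, 32)==0 ? "Z card matches" : "Z card differs");
  return 0;
}
</pre></blockquote>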
313 314 315 316 317 318 319 | second argument is the 40 character lowercase artifact ID of the artifact to which the tag is to be applied. The first value is the tag name. The first character of the tag is either "+", "-", or "*". The "+" means the tag should be added to the artifact. The "-" means the tag should be removed. The "*" character means the tag should be added to the artifact and all direct descendants (but not descendants through a merge) down | | | | 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 | second argument is the 40 character lowercase artifact ID of the artifact to which the tag is to be applied. The first value is the tag name. The first character of the tag is either "+", "-", or "*". The "+" means the tag should be added to the artifact. The "-" means the tag should be removed. The "*" character means the tag should be added to the artifact and all direct descendants (but not descendants through a merge) down to but not including the first descendant that contains a more recent "-", "*", or "+" tag with the same name. The optional third argument is the value of the tag. A tag without a value is a Boolean. When two or more tags with the same name are applied to the same artifact, the tag with the latest (most recent) date is used. Some tags have special meaning. The "comment" tag when applied to a check-in will override the check-in comment of that check-in for display purposes. The "user" tag overrides the name of the check-in user. The "date" tag overrides the check-in date. The "branch" tag sets the name of the branch that the check-in belongs to. Symbolic tags begin with the "sym-" prefix. The U card is the name of the user that created the control artifact. The Z card is the usual required artifact checksum. An example control artifact can be seen [/info/9d302ccda8 | here]. <a name="wikichng"></a> <h2>4.0 Wiki Pages</h2>
︙ | ︙ | |||
358 359 360 361 362 363 364 | <b>Z</b> <i>checksum</i> </blockquote> The D card is the date and time when the wiki page was edited. The P card specifies the parent wiki pages, if any. The L card gives the name of the wiki page. The optional N card specifies the mimetype of the wiki text. If the N card is omitted, the | | | 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 | <b>Z</b> <i>checksum</i> </blockquote> The D card is the date and time when the wiki page was edited. The P card specifies the parent wiki pages, if any. The L card gives the name of the wiki page. The optional N card specifies the mimetype of the wiki text. If the N card is omitted, the mimetype is assumed to be text/x-fossil-wiki. The U card specifies the login of the user who made this edit to the wiki page. The Z card is the usual checksum over the entire artifact and is required. The W card is used to specify the text of the wiki page. The argument to the W card is an integer which is the number of bytes of text in the wiki page. That text follows the newline character |
︙ | ︙ | |||
403 404 405 406 407 408 409 | J cards specify changes to the "value" of "fields" in the ticket. If the <i>value</i> parameter of the J card is omitted, then the field is set to an empty string. Each fossil server has a ticket configuration which specifies the fields it understands. The ticket configuration is part of the local state for the repository and thus can vary from one repository to another. | | | | 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 | J cards specify changes to the "value" of "fields" in the ticket. If the <i>value</i> parameter of the J card is omitted, then the field is set to an empty string. Each fossil server has a ticket configuration which specifies the fields it understands. The ticket configuration is part of the local state for the repository and thus can vary from one repository to another. Hence a J card might specify a <i>field</i> that does not exist in the local ticket configuration. If a J card specifies a <i>field</i> that is not in the local configuration, then that J card is simply ignored. The first argument of the J card is the field name. The second value is the field value. If the field name begins with "+" then the value is appended to the prior value. Otherwise, the value on the J card replaces any previous value of the field. The field name and value are both encoded using the character escapes defined for the C card of a manifest. An example ticket-change artifact can be seen [/artifact/91f1ec6af053 | here]. <a name="attachment"></a> <h2>6.0 Attachments</h2> An attachment artifact associates some other artifact that is the attachment (the source artifact) with a ticket or wiki page or technical note to which the attachment is connected (the target artifact). The following cards are allowed on an attachment artifact: <blockquote> <b>A</b> <i>filename target</i> ?<i>source</i>?<br /> <b>C</b> <i>comment</i><br /> <b>D</b> <i>time-and-date-stamp</i><br /> <b>N</b> <i>mimetype</i><br /> <b>U</b> <i>user-name</i><br /> <b>Z</b> <i>checksum</i> </blockquote> The A card specifies a filename for the attachment in its first argument. The second argument to the A card is the name of the wiki page or ticket or technical note to which the attachment is connected. The third argument is either missing or else it is the 40-character artifact ID of the attachment itself. A missing third argument means that the attachment should be deleted. The C card is an optional comment describing what the attachment is about. The C card is optional, but there can only be one. A single D card is required to give the date and time when the attachment
︙ | ︙ | |||
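The J-card replace-versus-append rule can be illustrated with a short sketch. The ticket field table below is a hypothetical in-memory stand-in for a local ticket configuration; it is not how Fossil actually stores tickets.
<blockquote><pre>
/* Sketch: a leading "+" on the field name appends, otherwise the value
** replaces the old one, and unknown fields are silently ignored. */
#include <stdio.h>
#include <string.h>

#define MAXVAL 256
struct field { const char *zName; char zValue[MAXVAL]; };
static struct field ticket[] = {        /* assumed local configuration */
  { "status",  "" },
  { "comment", "" },
};
#define NFIELD (sizeof(ticket)/sizeof(ticket[0]))

static void apply_j_card(const char *zField, const char *zValue){
  int append = zField[0]=='+';
  if( append ) zField++;
  for(size_t i=0; i<NFIELD; i++){
    if( strcmp(ticket[i].zName, zField)==0 ){
      if( !append ) ticket[i].zValue[0] = 0;
      strncat(ticket[i].zValue, zValue, MAXVAL-strlen(ticket[i].zValue)-1);
      return;
    }
  }
  /* field not in the local configuration: the J card is simply ignored */
}

int main(void){
  apply_j_card("status", "Open");
  apply_j_card("+comment", "First report.");
  apply_j_card("+comment", " More detail.");
  apply_j_card("severity", "High");      /* unknown here -> ignored */
  for(size_t i=0; i<NFIELD; i++)
    printf("%s = %s\n", ticket[i].zName, ticket[i].zValue);
  return 0;
}
</pre></blockquote>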
485 486 487 488 489 490 491 | <b>W</b> <i>size</i> <b>\n</b> <i>text</i> <b>\n</b><br /> <b>Z</b> <i>checksum</i> </blockquote> The C card contains text that is displayed on the timeline for the technote. The C card is optional, but there can only be one. | | | 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 | <b>W</b> <i>size</i> <b>\n</b> <i>text</i> <b>\n</b><br /> <b>Z</b> <i>checksum</i> </blockquote> The C card contains text that is displayed on the timeline for the technote. The C card is optional, but there can only be one. A single D card is required to give the date and time when the technote artifact was created. This is different from the time at which the technote appears on the timeline. A single E card gives the time of the technote (the point on the timeline where the technote is displayed) and a unique identifier for the technote. When there are multiple artifacts with the same technote-id, the one with the most recent D card is the only one used. The technote-id must be a |
︙ | ︙ | |||
523 524 525 526 527 528 529 | name means that tags can only be added and they can only be non-propagating tags. In a technote, T cards are normally used to set the background display color for timelines. The optional U card gives the name of the user who entered the technote. A single W card provides wiki text for the document associated with the | | | 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 | name means that tags can only be added and they can only be non-propagating tags. In a technote, T cards are normally used to set the background display color for timelines. The optional U card gives the name of the user who entered the technote. A single W card provides wiki text for the document associated with the technote. The format of the W card is exactly the same as for a [#wikichng | wiki artifact]. The Z card is the required checksum over the rest of the artifact. <a name="summary"></a> <h2>8.0 Card Summary</h2>
︙ | ︙ |
Changes to www/fiveminutes.wiki.
1 2 3 4 5 6 7 8 | <title>Up and running in 5 minutes as a single user</title> <p align="center"><b><i> The following document was contributed by Gilles Ganault on 2013-01-08. </i></b> </p><hr> <h1>Up and running in 5 minutes as a single user</h1> | | | | | | | | | | | | | | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 | <title>Up and running in 5 minutes as a single user</title> <p align="center"><b><i> The following document was contributed by Gilles Ganault on 2013-01-08. </i></b> </p><hr> <h1>Up and running in 5 minutes as a single user</h1> <p>This short document explains the main basic Fossil commands for a single user, i.e. with no additional users, with no need to synchronize with some remote repository, and no need for branching/forking.</p> <h2>Create a new repository</h2> <p>fossil new c:\test.repo</p> <p>This will create the new SQLite binary file that holds the repository, i.e. files, tickets, wiki, etc. It can be located anywhere, although it's considered best practice to keep it outside the work directory where you will work on files after they've been checked out of the repository.</p> <h2>Open the repository</h2> <p>cd c:\temp\test.fossil</p> <p>fossil open c:\test.repo</p> <p>This will check out the last revision of all the files in the repository, if any, into the current work directory. In addition, it will create a binary file _FOSSIL_ to keep track of changes (on non-Windows systems it is called <tt>.fslckout</tt>).</p> <h2>Add new files</h2> <p>fossil add .</p> <p>To tell Fossil to add new files to the repository. The files aren't actually added until you run "commit". When using ".", it tells Fossil to add all the files in the current directory recursively, i.e. including all the files in all the subdirectories.</p> <p>Note: To tell Fossil to ignore some extensions:</p> <p>fossil settings ignore-glob "*.o,*.obj,*.exe" --global</p> <h2>Remove files that haven't been committed yet</h2> <p>fossil delete myfile.c</p> <p>This will simply remove the item from the list of files that were previously added through "fossil add".</p> <h2>Check current status</h2> <p>fossil changes</p> <p>This shows the list of changes that have been done and will be committed the next time you run "fossil commit". It's a useful command to run before running "fossil commit" just to check that things are OK before proceeding.</p> <h2>Commit changes</h2> <p>To actually apply the pending changes to the repository, e.g. new files marked for addition, checked-out files that have been edited and must be checked-in, etc.</p> <p>fossil commit -m "Added stuff"</p> If no file names are provided on the command-line then all changes will be checked in, otherwise just the listed file(s) will be checked in. <h2>Compare two revisions of a file</h2> <p>If you wish to compare the last revision of a file and its checked out version in your work directory:</p> <p>fossil gdiff myfile.c</p> <p>If you wish to compare two different revisions of a file in the repository:</p> <p>fossil finfo myfile: Note the first hash, which is the UUID of the commit when the file was committed</p> <p>fossil gdiff --from UUID#1 --to UUID#2 myfile.c</p> <h2>Cancel changes and go back to previous revision</h2> <p>fossil revert myfile.c</p> <p>Fossil does not prompt when reverting a file. 
It simply reminds the user about the "undo" command, just in case the revert was a mistake.</p> <h2>Close the repository</h2> <p>fossil close</p> <p>This will simply remove the _FOSSIL_ at the root of the work directory but will not delete the files in the work directory. From then on, any use of "fossil" will trigger an error since there is no longer any connection.</p> |
Changes to www/foss-cklist.wiki.
︙ | ︙ | |||
87 88 89 90 91 92 93 | <li><p>The project has a bug tracker. <li><p>The project has a website. <li><p>Release version numbers are in the traditional X.Y or X.Y.Z format. | | | | 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 | <li><p>The project has a bug tracker. <li><p>The project has a website. <li><p>Release version numbers are in the traditional X.Y or X.Y.Z format. <li><p>Releases can be downloaded as tarball using gzip or bzip2 compression. <li><p>Releases unpack into a versioned top-level directory. (ex: "projectname-1.2.3/"). <li><p>A statement of license appears at the top of every source code file and the complete text of the license is included in the source code tarball. <li><p>There are no incompatible licenses in the code. <li><p>The project has not been blithely proclaimed "public domain" without having gone through the tedious and exacting legal steps to actually put it in the public domain. <li><p>There is an accurate change log in the code and on the website. <li><p>There is documentation in the code and on the website. </ol> |
Changes to www/fossil-from-msvc.wiki.
︙ | ︙ | |||
9 10 11 12 13 14 15 | <ol type="1"> <li>Tools > Settings > Expert Settings</li> <li>Tools > External Tools, where the items in this list map to "External Tool X" that we'll add to our own Fossil menu later: </li> <ol type="1"> <li>Rename the default "[New Tool 1]" to eg. | | | | | | | | | | | | | 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 | <ol type="1"> <li>Tools > Settings > Expert Settings</li> <li>Tools > External Tools, where the items in this list map to "External Tool X" that we'll add to our own Fossil menu later: </li> <ol type="1"> <li>Rename the default "[New Tool 1]" to eg. "Commit" 2. </li> <li>Change Command to where Fossil is located eg. "c:\fossil.exe"</li> <li>Change Arguments to the required command, eg. "commit -m". The user will be prompted to type the comment that Commit expects</li> <li>Set "Initial Directory" to point it to the work directory where the source files are currently checked out by Fossil (eg. c:\Workspace). It's also possible to use system variables such as "$(ProjectDir)" instead of hard-coding the path</li> <li>Check "Prompt for arguments", since Commit requires typing a comment. Useless for commands like Changes that don't require arguments</li> <li>Uncheck "Close on Exit", so we can see what Fossil says before closing the DOS box. Note that "Use Output Window" will display the output in a child window within the IDE instead of opening a DOS box</li> <li>Click on OK</li> </ol> <li>Tools > Customize > Commands</li> <ol type="1"> <li>With "Menu bar = Menu Bar" selected, click on "Add New Menu". A new "Fossil" menu is displayed in the IDE's menu bar</li> <li>Click on "Modify Selection" to rename it "Fossil", and...</li> <li>Use the "Move Down" button to move it lower in the list</li> </ol> <li>Still in Customize dialog: In the "Menu bar" combo, select the new Fossil menu you just created, and Click on "Add Command...": From Categories, select Tools, and select "External Command 1". Click on Close. It's unfortunate that the IDE doesn't say which command maps to "External Command X".</li> </ol> |
Changes to www/fossil-v-git.wiki.
1 2 3 4 5 6 7 8 | <title>Fossil Versus Git</title> <h2>1.0 Don't Stress!</h2> If you start out using one DVCS and later decide you like the other better, you can easily [./inout.wiki | move your content]¹. Fossil and [http://git-scm.com | Git] are very similar in many respects, | | | | | | | | | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 | <title>Fossil Versus Git</title> <h2>1.0 Don't Stress!</h2> If you start out using one DVCS and later decide you like the other better, you can easily [./inout.wiki | move your content]¹. Fossil and [http://git-scm.com | Git] are very similar in many respects, but they also have important differences. See the table below for a high-level summary and the text that follows for more details. Keep in mind that you are reading this on a Fossil website, so the information here might be biased in favor of Fossil. Ask around with people who have used both Fossil and Git for other opinions. ¹<small><i>Git does not support wiki, tickets, or tech-notes, so those elements will not transfer when exporting from Fossil to Git.</i></small> <h2>2.0 Executive Summary:</h2> <blockquote><table border=1 cellpadding=5 align=center> <tr><th width="50%">GIT</th><th width="50%">FOSSIL</th></tr> <tr><td>File versioning only</td> <td>Versioning, Tickets, Wiki, and Technotes</td></tr> <tr><td>Ad-hoc, pile-of-files key/value database</td> <td>Relational SQL database</td></tr> <tr><td>Bazaar-style development</td><td>Cathedral-style development</td></tr> <tr><td>Designed for Linux development</td> <td>Designed for SQLite development</td></tr> <tr><td>Lots of little tools</td><td>Stand-alone executable</td></tr> <tr><td>One check-out per repository</td> <td>Many check-outs per repository</td></tr> <tr><td>Remembers what you should have done</td> <td>Remembers what you actually did</td></tr> <tr><td>GPL</td><td>BSD</td></tr> </table></blockquote> <h2>3.0 Discussion</h2> <h3>3.1 Feature Set</h3> Git provides file versioning services only, whereas Fossil adds integrated [./wikitheory.wiki | wiki], [./bugtheory.wiki | ticketing & bug tracking], [./embeddeddoc.wiki | embedded documentation], and [./event.wiki | Technical notes]. These additional capabilities are available for Git as 3rd-party and/or user-installed add-ons, but with Fossil they are integrated into the design. One way to describe Fossil is that it is "[https://github.com/ | github]-in-a-box". If you clone Git's self-hosting repository you get just Git's source code. If you clone Fossil's self-hosting repository, you get the entire Fossil website - source code, documentation, ticket history, and so forth. For developers who choose to self-host projects (rather than using a 3rd-party service such as GitHub) Fossil is much easier to set up, since the stand-alone Fossil executable together with a 2-line CGI script suffice to instantiate a full-featured developer website. To accomplish the same using Git requires locating, installing, configuring, integrating, and managing a wide assortment of separate tools. Standing up a developer website using Fossil can be done in minutes, whereas doing the same using Git requires hours or days. 
<h3>3.2 Database</h3> The baseline data structures for Fossil and Git are the same (modulo formatting details). Both systems store check-ins as immutable objects referencing their immediate ancestors and named by their SHA1 hash. The difference is that Git stores its objects as individual files in the ".git" folder or compressed into bespoke "pack-files", whereas Fossil stores its objects in a relational ([https://www.sqlite.org/|SQLite]) database file. To put it another way, Git uses an ad-hoc pile-of-files key/value database whereas Fossil uses a proven, general-purpose SQL database. This difference is more than an implementation detail. It has important consequences. With Git, one can easily locate the ancestors of a particular check-in by following the pointers embedded in the check-in object, but it is difficult to go the other direction and locate the descendants of a check-in. It is so difficult, in fact, that neither native Git nor GitHub provide this capability. With Git, if you are looking at some historical check-in then you cannot ask "what came next" or "what are the children of this check-in". Fossil, on the other hand, parses essential information about check-ins (parents, children, committers, comments, files changed, etc.) into a relational database that can be easily queried using concise SQL statements to find both ancestors and descendants of a check-in. Leaf check-ins in Git that lack a "ref" become "detached", making them difficult to locate and subject to garbage collection. This "detached head" problem has caused untold grief for countless Git users. With Fossil, all check-ins are easily located using a variety of attributes (parents, children, committer, date, full-text search of the check-in comment) and so detached heads are simply not possible. The ease with which check-ins can be located and queried in Fossil has resulted in a huge variety of reports and status screens ([./webpage-ex.md|examples]) that show project state in ways that help developers maintain enhanced awareness and comprehension and avoid errors. <h3>3.3 Cathedral vs. Bazaar</h3> Fossil and Git promote different development styles. Git promotes a "bazaar" development style in which numerous anonymous developers make small and sometimes haphazard contributions. Fossil promotes a "cathedral" development model in which the project is closely supervised by a highly engaged architect and implemented by a clique of developers. Nota Bene: This is not to say that Git cannot be used for cathedral-style development or that Fossil cannot be used for bazaar-style development. They can be. But those modes are not their design intent nor their low-friction path. Git encourages a style in which individual developers work in relative isolation, maintaining their own branches and occasionally rebasing and pushing selected changes up to the main repository. Developers using Git often have their own private branches that nobody else ever sees. Work becomes siloed. This is exactly what one wants when doing bazaar-style development. Fossil, in contrast, strives to keep all changes from all contributors mirrored in the main repository (in separate branches) at all times. Work in progress from one developer is readily visible to all other
︙ | ︙ | |||
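To make the point about SQL queryability concrete, the sketch below asks a repository database for the children of a check-in through the ordinary SQLite C API. It assumes the plink and blob tables described in Fossil's schema documentation and uses a made-up hash prefix; treat it as an illustration rather than a supported interface.
<blockquote><pre>
/* Sketch: list the children of a check-in from a Fossil repository
** database.  plink(pid,cid) and blob(rid,uuid) are assumed from the
** documented schema; 'abcd1234' is a placeholder hash prefix. */
#include <sqlite3.h>
#include <stdio.h>

int main(void){
  sqlite3 *db;
  if( sqlite3_open("repo.fossil", &db)!=SQLITE_OK ) return 1;
  const char *zSql =
    "SELECT b.uuid FROM plink p JOIN blob b ON b.rid=p.cid "
    "WHERE p.pid=(SELECT rid FROM blob WHERE uuid LIKE 'abcd1234%')";
  sqlite3_stmt *pStmt;
  if( sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0)==SQLITE_OK ){
    while( sqlite3_step(pStmt)==SQLITE_ROW ){
      printf("child: %s\n", (const char*)sqlite3_column_text(pStmt, 0));
    }
    sqlite3_finalize(pStmt);
  }
  sqlite3_close(db);
  return 0;
}
</pre></blockquote>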
162 163 164 165 166 167 168 | <h3>3.5 Lots of little tools vs. Self-contained system</h3> Git consists of many small tools, each doing one small part of the job, which can be recombined (by experts) to perform powerful operations. Git has a lot of complexity and many dependencies and requires an "installer" script or program to get it running. | | | | | | | | | | 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 | <h3>3.5 Lots of little tools vs. Self-contained system</h3> Git consists of many small tools, each doing one small part of the job, which can be recombined (by experts) to perform powerful operations. Git has a lot of complexity and many dependencies and requires an "installer" script or program to get it running. Fossil is a single self-contained stand-alone executable with hardly any dependencies. Fossil can be (and often is) run inside a minimally configured chroot jail. To install Fossil, one merely puts the executable on $PATH. The designer of Git says that the unix philosophy is to have lots of small tools that collaborate to get the job done. The designer of Fossil says that the unix philosophy is "it just works". Both individuals have written their DVCSes to reflect their own view of the "unix philosophy". <h3>3.6 One vs. Many Check-outs per Repository</h3> A "repository" in Git is a pile-of-files in the ".git" subdirectory of a single check-out. The check-out and the repository are inseparable. With Fossil, a "repository" is a single SQLite database file that can be stored anywhere. There can be multiple active check-outs from the same repository, perhaps open on different branches or on different snapshots of the same branch. Long-running tests or builds can be running in one check-out while changes are being committed in another. <h3>3.7 What you should have done vs. What you actually did</h3> Git puts a lot of emphasis on maintaining a "clean" check-in history. Extraneous and experimental branches by individual developers often never make it into the main repository. And branches are often rebased before being pushed, to make it appear as if development had been linear. Git strives to record what the development of a project should have looked like had there been no mistakes. Fossil, in contrast, puts more emphasis on recording exactly what happened, including all of the messy errors, dead-ends, experimental branches, and so forth. One might argue that this makes the history of a Fossil project "messy". But another point of view is that this makes the history "accurate". In actual practice, the superior reporting tools available in Fossil mean that the added "mess" is not a factor. One commentator has mused that Git records history according to the victors, whereas Fossil records history as it actually happened. <h3>3.8 GPL vs. BSD</h3> Git is covered by the GPL license whereas Fossil is covered by a two-clause BSD license. Consider the difference between GPL and BSD licenses: GPL is designed to make writing easier at the expense of making reading harder. BSD is designed to make reading easier at the expense of making writing harder. To a first approximation, the GPL license grants the right to read source code to anyone who promises to give back enhancements.
In other words, the act of reading GPL source code (a prerequisite for making changes) implies acceptance of the license which requires updates to be contributed back under the same license. (The details are more complex, but the foregoing captures the essence of the idea.) A big advantage of the GPL is that anybody can contribute to the code without having to sign additional legal documentation because they have implied their acceptance of the GPL
︙ | ︙ | |||
247 248 249 250 251 252 253 | cliquish, cathedral-style approach more typical of BSD-licensed projects. <h2>4.0 Missing Features</h2> Most of the capabilities found in Git are also available in Fossil and the other way around. For example, both systems have local check-outs, remote repositories, push/pull/sync, bisect capabilities, and a "stash". | | | 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 | cliquish, cathedral-style approach more typical of BSD-licensed projects. <h2>4.0 Missing Features</h2> Most of the capabilities found in Git are also available in Fossil and the other way around. For example, both systems have local check-outs, remote repositories, push/pull/sync, bisect capabilities, and a "stash". Both systems store project history as a directed acyclic graph (DAG) of immutable check-in objects. But there are a few capabilities in one system that are missing from the other. <h3>4.1 Features found in Fossil but missing from Git</h3> |
︙ | ︙ | |||
269 270 271 272 273 274 275 | * <b>Wiki, Embedded documentation, Trouble-tickets, and Tech-Notes</b> Git only provides versioning of source code. Fossil strives to provide other related configuration management services as well. * <b>Named branches</b> | | | | 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 | * <b>Wiki, Embedded documentation, Trouble-tickets, and Tech-Notes</b> Git only provides versioning of source code. Fossil strives to provide other related configuration management services as well. * <b>Named branches</b> Branches in Fossil have persistent names that are propagated to collaborators via [/help?cmd=push|push] and [/help?cmd=pull|pull]. All developers see the same name on the same branch. Git, in contrast, uses only local branch names, so developers working on the same project can (and frequently do) use a different name for the same branch. * <b>The [/help?cmd=all|fossil all] command</b> Fossil keeps track of all repositories and check-outs and allows operations over all of them with a single command. For example, in Fossil it is possible to request a pull of all repositories on a laptop from their respective servers, prior to taking the laptop off network.
︙ | ︙ |
Changes to www/index.wiki.
1 2 3 4 5 6 | <title>Home</title> <h3>What Is Fossil?</h3> <div style='width:200px;float:right;border:2px solid #446979;padding:10px;margin:0px 10px;'> <ul> | | | | < | | > > < < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 | <title>Home</title> <h3>What Is Fossil?</h3> <div style='width:200px;float:right;border:2px solid #446979;padding:10px;margin:0px 10px;'> <ul> <li> [/uv/download.html | Download] <li> [./quickstart.wiki | Quick Start] <li> [./build.wiki | Install] <li> [../COPYRIGHT-BSD2.txt | License] <li> [./faq.wiki | FAQ] <li> [./changes.wiki | Change Log] <li> [./hacker-howto.wiki | Hacker How-To] <li> [./hints.wiki | Tip & Hints] <li> [./permutedindex.html | Documentation Index] <li> [http://www.fossil-scm.org/schimpf-book/home | Jim Schimpf's book] <li> Mailing list <ul> <li> [http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/fossil-users | sign-up] <li> [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org | archives] </ul> </ul> <img src="fossil3.gif" align="center"> </div> <p>Fossil is a simple, high-reliability, distributed software configuration management system with these advanced features: 1. <b>Integrated Bug Tracking, Wiki, and Technotes</b> - In addition to doing [./concepts.wiki | distributed version control] like Git and Mercurial, Fossil also supports [./bugtheory.wiki | bug tracking], [./wikitheory.wiki | wiki], and [./event.wiki | technotes]. 2. <b>Built-in Web Interface</b> - Fossil has a built-in and intuitive [./webui.wiki | web interface] with a rich assortment of information pages ([./webpage-ex.md|examples]) designed to promote situational awareness. This entire website is just a running instance of Fossil. The pages you see here are all [./wikitheory.wiki | wiki] or [./embeddeddoc.wiki | embedded documentation] or (in the case of the [/uv/download.html|download] page) [./unvers.wiki | unversioned files]. When you clone Fossil from one of its [./selfhost.wiki | self-hosting repositories], you get more than just source code - you get this entire website. 3. <b>Self-Contained</b> - Fossil is a single self-contained stand-alone executable. To install, simply download a <a href="http://www.fossil-scm.org/download.html">precompiled binary</a> for Linux, Mac, OpenBSD, or Windows and put it on your $PATH. [./build.wiki | Easy-to-compile source code] is also available. |
︙ | ︙ | |||
82 83 84 85 86 87 88 | the repository are consistent prior to each commit. 8. <b>Free and Open-Source</b> - Uses the [../COPYRIGHT-BSD2.txt|2-clause BSD license]. <hr> <h3>Links For Fossil Users:</h3> | | < < < | > > > > | 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 | the repository are consistent prior to each commit. 8. <b>Free and Open-Source</b> - Uses the [../COPYRIGHT-BSD2.txt|2-clause BSD license]. <hr> <h3>Links For Fossil Users:</h3> * [./permutedindex.html | Documentation index] with [/search?c=d | full text search]. * [./reviews.wiki | Testimonials] from satisfied Fossil users and [./quotes.wiki | Quotes] about Fossil and other DVCSes. * [./faq.wiki | Frequently Asked Questions] * The [./concepts.wiki | concepts] behind Fossil. * [./quickstart.wiki | Quick Start] guide to using Fossil. * [./qandc.wiki | Questions & Criticisms] directed at Fossil. * [./build.wiki | Compiling and Installing] * "Fuel" is cross-platform GUI front-end for Fossil written in Qt. [http://fuelscm.org/]. Fuel is an independent project run by a different group of developers. * Fossil supports [./embeddeddoc.wiki | embedded documentation] that is versioned along with project source code. * Fossil uses an [./fileformat.wiki | enduring file format] that is designed to be readable, searchable, and extensible by people not yet born. * A tutorial on [./branching.wiki | branching], what it means and how to do it using Fossil. |
︙ | ︙ |
Changes to www/inout.wiki.
1 2 | <title>Import And Export</title> | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 | <title>Import And Export</title> Fossil has the ability to import and export repositories from and to [http://git-scm.com/ | Git]. And since most other version control systems will also import/export from Git, that means that you can import/export a Fossil repository to most version control systems using Git as an intermediary. <h2>Git → Fossil</h2> To import a Git repository into Fossil, run commands like this: <blockquote><pre> cd git-repo git fast-export --all | fossil import --git new-repo.fossil </pre></blockquote> In other words, simply pipe the output of the "git fast-export" command into the "fossil import --git" command. The 3rd argument to the "fossil import" command is the name of a new Fossil repository that is created to hold the Git content. The --git option is not actually required. The git-fast-export file format is currently the only VCS interchange format that Fossil understands. But future versions of Fossil might be enhanced to understand other VCS interchange formats, and so for compatibility, use of the --git option is recommended. <h2>Fossil → Git</h2> To convert a Fossil repository into a Git repository, run commands like this: |
︙ | ︙ | |||
41 42 43 44 45 46 47 | "fossil export --git" command into the "git fast-import" command. Note that the "fossil export --git" command only exports the versioned files. Tickets and wiki and events are not exported, since Git does not understand those concepts. As with the "import" command, the --git option is not required | | | > > > > > | > > > | > > > | > > > | > > > | > > > | > > > > > > > > > > | > > | > > > > > > > | | 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 | "fossil export --git" command into the "git fast-import" command. Note that the "fossil export --git" command only exports the versioned files. Tickets and wiki and events are not exported, since Git does not understand those concepts. As with the "import" command, the --git option is not required since the git-fast-export file format is currently the only VCS interchange format that Fossil will generate. However, future versions of Fossil might add the ability to generate other VCS interchange formats, and so for compatibility, the use of the --git option recommended. <h2>Bidirectional Synchronization</h2> Fossil also has the ability to synchronize with a Git repository via repeated imports and/or exports. To do this, it uses marks files to store a record of artifacts which are known by both Git and Fossil to exist at a given point in time. To illustrate, consider the example of a remote Fossil repository that a user wants to import into a local Git repository. First, the user would clone the remote repository and import it into a new Git repository: <blockquote><pre> fossil clone /path/to/remote/repo.fossil repo.fossil mkdir repo cd repo fossil open ../repo.fossil mkdir ../repo.git cd ../repo.git git init . fossil export --git --export-marks ../repo/fossil.marks \ ../repo.fossil | git fast-import \ --export-marks=../repo/git.marks </pre></blockquote> Once the import has completed, the user would need to <tt>git checkout trunk</tt>. At any point after this, new changes can be imported from the remote Fossil repository: <blockquote><pre> cd ../repo fossil pull cd ../repo.git fossil export --git --import-marks ../repo/fossil.marks \ --export-marks ../repo/fossil.marks \ ../repo.fossil | git fast-import \ --import-marks=../repo/git.marks \ --export-marks=../repo/git.marks </pre></blockquote> Changes in the Git repository can be exported to the Fossil repository and then pushed to the remote: <blockquote><pre> git fast-export --import-marks=../repo/git.marks \ --export-marks=../repo/git.marks --all | fossil import --git \ --incremental --import-marks ../repo/fossil.marks \ --export-marks ../repo/fossil.marks ../repo.fossil cd ../repo fossil push </pre></blockquote> |
Changes to www/makefile.wiki.
1 2 3 4 5 6 | <title>The Fossil Build Process</title> <h1>1.0 Introduction</h1> The build process for Fossil is tricky in that the source code needs to be processed by three different preprocessor programs | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 | <title>The Fossil Build Process</title> <h1>1.0 Introduction</h1> The build process for Fossil is tricky in that the source code needs to be processed by three different preprocessor programs before it is compiled. Most users will download a [http://www.fossil-scm.org/download.html | precompiled binary] so this is of no consequence to them, and even those who want to compile the code themselves can use one of the [./build.wiki | existing makefiles]. So most people do not need to be concerned with the build complexities of Fossil. But hard-core developers who desire a deep understanding of how Fossil is put together can benefit from reviewing this article. <a name="srctour"></a> <h1>2.0 Source Code Tour</h1> The source code for Fossil is found in the [/dir?ci=trunk&name=src | src/] subdirectory of the source tree. The src/ subdirectory contains all code, including the code for the separate preprocessor programs. Each preprocessor program is a separate C program implemented in a single file of C source code. The three preprocessor programs are:
︙ | ︙ | |||
42 43 44 45 46 47 48 49 50 51 | The sqlite3.c and sqlite3.h source files are byte-for-byte copies of a standard [http://www.sqlite.org/amalgamation.html | amalgamation]. The shell.c source file is code for the SQLite [http://www.sqlite.org/sqlite.html | command-line shell] that is used to help implement the [/help/sqlite3 | fossil sql] command. The shell.c source file is also a byte-for-byte copy of the shell.c file from the SQLite release. The TH1 script engine is implemented using files: | > > > > > > > > | | | | | | | | | | 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 | The sqlite3.c and sqlite3.h source files are byte-for-byte copies of a standard [http://www.sqlite.org/amalgamation.html | amalgamation]. The shell.c source file is code for the SQLite [http://www.sqlite.org/sqlite.html | command-line shell] that is used to help implement the [/help/sqlite3 | fossil sql] command. The shell.c source file is also a byte-for-byte copy of the shell.c file from the SQLite release. The SQLite shell.c file uses the [https://github.com/antirez/linenoise | linenoise] library to implement line editing. linenoise comprises two source files which were copied from the upstream repository with only very minor portability edits: 7. linenoise.c 8. linenoise.h The TH1 script engine is implemented using files: 9. th.c 10. th.h These two files are imports like the SQLite source files, and so are not preprocessed. The VERSION.h header file is generated from other information sources using a small program called: 11. mkversion.c The builtin_data.h header file contains the definitions of C-language byte-array constants that contain various resources such as scripts and images. The builtin_data.h header file is generated from the original resource files using a small program called: 12. mkbuiltin.c The src/ subdirectory also contains documentation about the makeheaders preprocessor program: 13. [../src/makeheaders.html | makeheaders.html] Click on the link to read this documentation. In addition there is a [http://www.tcl-lang.org/ | Tcl] script used to build the various makefiles: 14. makemake.tcl Running this Tcl script will automatically regenerate all makefiles. In order to add a new source file to the Fossil implementation, simply edit makemake.tcl to add the new filename, then rerun the script, and all of the makefiles for all targets will be rebuilt. Finally, there is one of the makefiles generated by makemake.tcl: 15. main.mk The main.mk makefile is invoked from the Makefile in the top-level directory. The main.mk is generated by makemake.tcl and should not be hand edited. Other makefiles generated by makemake.tcl are in other subdirectories (currently all in the win/ subdirectory). All the other files in the src/ subdirectory (79 files at the time of this writing) are C source code files that are subject to the preprocessing steps described below. In the sequel, we will call these other files "src.c" in order to have a convenient name. The reader should understand that whenever "src.c" or "src.h" is used in the text
︙ | ︙ | |||
107 108 109 110 111 112 113 | "manifest.uuid", and "VERSION" source files in the root directory of the source tree. (The "manifest" and "manifest.uuid" files are automatically generated and updated by Fossil itself. See the [/help/setting | fossil set manifest] command for additional information.) The VERSION.h header file is generated by | | | | 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 | "manifest.uuid", and "VERSION" source files in the root directory of the source tree. (The "manifest" and "manifest.uuid" files are automatically generated and updated by Fossil itself. See the [/help/setting | fossil set manifest] command for additional information.) The VERSION.h header file is generated by a C program: src/mkversion.c. To run the VERSION.h generator, first compile the src/mkversion.c source file into a command-line program (named "mkversion.exe") then run: <blockquote><pre> mkversion.exe manifest.uuid manifest VERSION >VERSION.h </pre></blockquote> The pathnames in the above command might need to be adjusted to get the directories right. The point is that the manifest.uuid, manifest, and VERSION files in the root of the source tree are the three arguments and the generated VERSION.h file appears on standard output. The builtin_data.h header file is generated by a C program: src/mkbuiltin.c. The builtin_data.h file contains C-language byte-array definitions for the content of resource files used by Fossil. To generate the builtin_data.h file, first compile the mkbuiltin.c program, then run: <blockquote><pre> mkbuiltin.exe diff.tcl <i>OtherFiles...</i> >builtin_data.h </pre></blockquote> At the time of this writing, the "diff.tcl" script (a Tcl/Tk script used
︙ | ︙ | |||
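For readers wondering what the generated builtin_data.h roughly looks like, the following hand-written sketch shows the general shape of a resource embedded as a C byte array. The symbol names here are invented; the real mkbuiltin.c output may differ in naming and layout.
<blockquote><pre>
/* Sketch of the idea behind builtin_data.h: a resource file becomes a
** static byte array plus its length.  Illustrative names only. */
#include <stdio.h>

static const unsigned char bidata_example[] = {
  0x23, 0x20, 0x64, 0x69, 0x66, 0x66, 0x2e, 0x74,   /* "# diff.t" */
  0x63, 0x6c, 0x0a,                                  /* "cl\n"     */
};
static const int bidata_example_len = (int)sizeof(bidata_example);

int main(void){
  /* dump the embedded resource back out, byte for byte */
  fwrite(bidata_example, 1, bidata_example_len, stdout);
  return 0;
}
</pre></blockquote>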
161 162 163 164 165 166 167 | </pre></blockquote> Note that "src.c" in the above is a stand-in for the (79) regular source files of Fossil - all source files except for the exceptions described in section 2.0 above. The output of the mkindex program is a header file that is #include-ed by | | | | 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 | </pre></blockquote> Note that "src.c" in the above is a stand-in for the (79) regular source files of Fossil - all source files except for the exceptions described in section 2.0 above. The output of the mkindex program is a header file that is #include-ed by the main.c source file during the final compilation step. <h2>4.2 The translate preprocessor</h2> The translate preprocessor looks for lines of source code that begin with "@" and converts those lines into string constants or (depending on context) into special "printf" operations for generating the output of an HTTP request. The translate preprocessor is a simple C program whose sources are in the translate.c source file. The translate preprocessor is run on each of the other ordinary source files separately, like this: <blockquote><pre> ./translate src.c >src_.c </pre></blockquote> In this case, the "src.c" file represents any single source file from the set of ordinary source files as described in section 2.0 above. Note that each source file is translated separately. By convention, the names of the translated source files are the names of the input sources with a single "_" character at the end. But a new makefile can use any naming convention it wants - the "_" is not critical to the build process. After being translated, the output files (the "src_.c" files) should be used for all subsequent preprocessing and compilation steps. <h2>4.3 The makeheaders preprocessor</h2>
︙ | ︙ | |||
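A tiny illustration of the effect of the translate step described above (this is not actual translate.c output): a handler written with "@" lines is turned into ordinary C calls that emit the same text. printf stands in for Fossil's own CGI output routine so the sketch stays self-contained and runnable.
<blockquote><pre>
/* Conceptual before/after for the translate preprocessor.
**
** Input to translate:                 Output (conceptually):
**   @ <h1>Hello</h1>          ->        printf("<h1>Hello</h1>\n");
*/
#include <stdio.h>

void hello_page(void){
  /* what two translated "@" lines amount to */
  printf("<h1>Hello</h1>\n");
  printf("<p>Generated by Fossil.</p>\n");
}

int main(void){
  hello_page();
  return 0;
}
</pre></blockquote>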
207 208 209 210 211 212 213 | is like this: <blockquote><pre> makeheaders src_.c:src.h sqlite3.h th.h VERSION.h </pre></blockquote> In the example above the "src_.c" and "src.h" names represent all of the | | | > | > > > | 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 | is like this: <blockquote><pre> makeheaders src_.c:src.h sqlite3.h th.h VERSION.h </pre></blockquote> In the example above the "src_.c" and "src.h" names represent all of the (79) ordinary C source files, each as a separate argument. <h1>5.0 Compilation</h1> After all generated files have been created and all ordinary source files have been preprocessed, the generated and preprocessed files can be combined into a single executable using a C compiler. This can be done all at once, or each preprocessed source file can be compiled into a separate object code file and the resulting object code files linked together in a final step. Some files require special C-preprocessor macro definitions. When compiling sqlite.c, the following macros are recommended: * -DSQLITE_OMIT_LOAD_EXTENSION=1 * -DSQLITE_ENABLE_DBSTAT_VTAB=1 * -DSQLITE_ENABLE_FTS4=1 * -DSQLITE_ENABLE_LOCKING_STYLE=0 * -DSQLITE_LIKE_DOESNT_MATCH_BLOBS=1 * -DSQLITE_THREADSAFE=0 * -DSQLITE_DEFAULT_FILE_FORMAT=4 * -DSQLITE_ENABLE_EXPLAIN_COMMENTS=1 The first three symbol definitions above are required; the others are merely recommended. Extension loading is omitted as a security measure. The dbstat virtual table is needed for the [/help?cmd=/repo-tabsize|/repo-tabsize] page. FTS4 is needed for the search feature. Fossil is single-threaded so mutexing is disabled in SQLite as a performance enhancement. The SQLITE_ENABLE_EXPLAIN_COMMENTS option makes the output of "EXPLAIN" queries in the "[/help?cmd=sqlite3|fossil sql]" command much more readable. When compiling the shell.c source file, these macros are required: * -Dmain=sqlite3_main * -DSQLITE_OMIT_LOAD_EXTENSION=1 The "main()" routine in the shell must be changed into sqlite3_main() to prevent it from colliding with the real main() in Fossil, and to give Fossil an entry point to jump to when the [/help/sqlite3 | fossil sql] command is invoked. All the other source code files can be compiled without any special options. <h1>6.0 Linkage</h1> Fossil needs to be linked against [http://www.zlib.net | zlib]. If the HTTPS option is enabled, then it will also need to link against the appropriate SSL implementation. And, of course, Fossil needs to link against the standard C library. No other libraries or external dependences are used. Fossil includes a copy of [https://github.com/richgel999/miniz | miniz] which can be used as an alternative to zlib. <h1>7.0 See Also</h1> * [./tech_overview.wiki | A Technical Overview Of Fossil] * [./adding_code.wiki | How To Add Features To Fossil] |
Changes to www/mkdownload.tcl.
1 2 | #!/usr/bin/tclsh # | | > > | < < < < < < < < < < | < < < < < < < < < < < < < < < < < < | | > > > | > > > > > > | > > | > > > > > | | < < | < | | > | | | | | | | < > | < < < < | > > > | > | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 | #!/usr/bin/tclsh # # Run this script to build and install the "download.html" page of # unversioned comment. # # Also generate the fossil_download_checksums.html page. # # set out [open download.html w] fconfigure $out -encoding utf-8 -translation lf puts $out \ {<div class='fossil-doc' data-title='Download Page'> <center><font size=4>} puts $out \ "<b>To install Fossil →</b> download the stand-alone executable" puts $out \ {and put it on your $PATH. </font><p><small> RPMs available <a href="http://download.opensuse.org/repositories/home:/rmax:/fossil/"> here.</a> Cryptographic checksums for download files are <a href="http://www.hwaci.com/fossil_download_checksums.html">here</a>. </small></p> <table cellpadding="10"> } # Find all unique timestamps. # set in [open {|fossil uv list} rb] while {[gets $in line]>0} { set fn [lindex $line 5] set filesize($fn) [lindex $line 3] if {[regexp -- {-(\d\.\d+)\.(tar\.gz|zip)$} $fn all version]} { set filehash($fn) [lindex $line 1] set avers($version) 1 } } close $in set vdate(1.36) 2016-10-24 set vdate(1.35) 2016-06-14 set vdate(1.34) 2016-11-02 # Do all versions from newest to oldest # foreach vers [lsort -decr -real [array names avers]] { # set hr "../timeline?c=version-$vers;y=ci" set v2 v[string map {. 
_} $vers] set hr "../doc/trunk/www/changes.wiki#$v2" puts $out "<tr><td colspan=6 align=left><hr>" puts $out "<center><b><a href=\"$hr\">Version $vers</a>" if {[info exists vdate($vers)]} { set hr2 "../timeline?c=version-$vers&y=ci" puts $out " (<a href='$hr2'>$vdate($vers)</a>)" } puts $out "</b></center>" puts $out "</td></tr>" puts $out "<tr>" foreach {prefix img desc} { fossil-linux-x86 linux.gif {Linux 3.x x86} fossil-macosx mac.gif {Mac 10.x x86} fossil-openbsd-x86 openbsd.gif {OpenBSD 5.x x86} fossil-w32 win32.gif {Windows} fossil-src src.gif {Source Tarball} } { set glob download/$prefix*-$vers* set filename [array names filesize $glob] if {[info exists filesize($filename)]} { set size [set filesize($filename)] set units bytes if {$size>1024*1024} { set size [format %.2f [expr {$size/(1024.0*1024.0)}]] set units MiB } elseif {$size>1024} { set size [format %.2f [expr {$size/(1024.0)}]] set units KiB } puts $out "<td align=center valign=bottom><a href=\"$filename\">" puts $out "<img src=\"build-icons/$img\" border=0><br>$desc</a><br>" puts $out "$size $units</td>" } else { puts $out "<td> </td>" } } puts $out "</tr>" # # if {[info exists filesize(download/releasenotes-$vers.html)]} { # puts $out "<tr><td colspan=6 align=left>" # set rn [|open uv cat download/releasenotes-$vers.html] # fconfigure $rn -encoding utf-8 # puts $out "[read $rn]" # close $rn # puts $out "</td></tr>" # } } puts $out "<tr><td colspan=5><hr></td></tr>" puts $out {</table></center></div>} close $out # Generate the checksum page # set out [open fossil_download_checksums.html w] fconfigure $out -encoding utf-8 -translation lf puts $out {<html> <title>Fossil Download Checksums</title> <body> <h1 align="center">Checksums For Fossil Downloads</h1> <p>The following table shows the SHA1 checksums for the precompiled binaries available on the <a href="/download.html">Fossil website</a>.</p> <pre>} foreach {line} [split [exec fossil sql "SELECT hash, name FROM unversioned\ WHERE name GLOB '*.tar.gz' OR\ name GLOB '*.zip'"] \n] { set x [split $line |] set hash [lindex $x 0] set nm [file tail [lindex $x 1]] puts $out "$hash $nm" } puts $out {</pre></body></html>} close $out |
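Note that the script above shells out to the fossil binary itself (through "fossil uv list" and "fossil sql"), so a fossil executable must be on the PATH and the script must be run where it can see the repository that carries the unversioned download artifacts. A typical invocation might look like the following; the working directory is an assumption for illustration, not something the script enforces:

<blockquote><pre>
cd www
tclsh mkdownload.tcl
ls -l download.html fossil_download_checksums.html
</pre></blockquote>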
Changes to www/mkindex.tcl.
|
| | | > > | > > > | > > > > > | > > > | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 | #!/usr/bin/env tclsh # # Run this TCL script to generate a WIKI page that contains a # permuted index of the various documentation files. # # tclsh mkindex.tcl # set doclist { aboutcgi.wiki {How CGI Works In Fossil} adding_code.wiki {Adding New Features To Fossil} adding_code.wiki {Hacking Fossil} antibot.wiki {Defense against Spiders and Bots} blame.wiki {The Annotate/Blame Algorithm Of Fossil} branching.wiki {Branching, Forking, Merging, and Tagging} bugtheory.wiki {Bug Tracking In Fossil} build.wiki {Compiling and Installing Fossil} changes.wiki {Fossil Changelog} checkin_names.wiki {Check-in And Version Names} checkin.wiki {Check-in Checklist} childprojects.wiki {Child Projects} copyright-release.html {Contributor License Agreement} concepts.wiki {Fossil Core Concepts} contribute.wiki {Contributing Code or Documentation To The Fossil Project} customgraph.md {Theming: Customizing the Timeline Graph} customskin.md {Theming: Customizing The Appearance of Web Pages} custom_ticket.wiki {Customizing The Ticket System} delta_encoder_algorithm.wiki {Fossil Delta Encoding Algorithm} delta_format.wiki {Fossil Delta Format} embeddeddoc.wiki {Embedded Project Documentation} encryptedrepos.wiki {How To Use Encrypted Repositories} env-opts.md {Environment Variables and Global Options} event.wiki {Events} faq.wiki {Frequently Asked Questions} fileformat.wiki {Fossil File Format} fiveminutes.wiki {Update and Running in 5 Minutes as a Single User} foss-cklist.wiki {Checklist For Successful Open-Source Projects} fossil-from-msvc.wiki {Integrating Fossil in the Microsoft Express 2010 IDE} fossil-v-git.wiki {Fossil Versus Git} hacker-howto.wiki {Hacker How-To} /help {Lists of Commands and Webpages} hints.wiki {Fossil Tips And Usage Hints} index.wiki {Home Page} inout.wiki {Import And Export To And From Git} makefile.wiki {The Fossil Build Process} /md_rules {Markdown Formatting Rules} newrepo.wiki {How To Create A New Fossil Repository} password.wiki {Password Management And Authentication} pop.wiki {Principles Of Operation} private.wiki {Creating, Syncing, and Deleting Private Branches} qandc.wiki {Questions And Criticisms} quickstart.wiki {Fossil Quick Start Guide} quotes.wiki {Quotes: What People Are Saying About Fossil, Git, and DVCSes in General} ../test/release-checklist.wiki {Pre-Release Testing Checklist} reviews.wiki {Reviews} selfcheck.wiki {Fossil Repository Integrity Self Checks} selfhost.wiki {Fossil Self Hosting Repositories} server.wiki {How To Configure A Fossil Server} settings.wiki {Fossil Settings} /sitemap {Site Map} shunning.wiki {Shunning: Deleting Content From Fossil} stats.wiki {Performance Statistics} style.wiki {Source Code Style Guidelines} ssl.wiki {Using SSL with Fossil} sync.wiki {The Fossil Sync Protocol} tech_overview.wiki {A Technical Overview Of The Design And Implementation Of Fossil} tech_overview.wiki {SQLite Databases Used By Fossil} th1.md {The TH1 Scripting Language} tickets.wiki {The Fossil Ticket System} theory1.wiki {Thoughts On The Design Of The Fossil DVCS} unvers.wiki {Unversioned Files} webpage-ex.md {Webpage Examples} webui.wiki {The Fossil Web Interface} whyusefossil.wiki {Why You Should Use Fossil} 
whyusefossil.wiki {Benefits Of Version Control} wikitheory.wiki {Wiki In Fossil} /wiki_rules {Wiki Formatting Rules} } set permindex {} set stopwords { a about against and are as by for fossil from in of on or should the to use used with } foreach {file title} $doclist { set n [llength $title] regsub -all {\s+} $title { } title lappend permindex [list $title $file 1] for {set i 0} {$i<$n-1} {incr i} { set prefix [lrange $title 0 $i] set suffix [lrange $title [expr {$i+1}] end] set firstword [string tolower [lindex $suffix 0]] if {[lsearch $stopwords $firstword]<0} { lappend permindex [list "$suffix — $prefix" $file 0] } } } set permindex [lsort -dict -index 0 $permindex] set out [open permutedindex.html w] fconfigure $out -encoding utf-8 -translation lf puts $out \ |
︙ | ︙ | |||
106 107 108 109 110 111 112 | book</a> <li> <a href='$ROOT/help'>Command-line help</a> </ul> <a name="pindex"></a> <h2>Permuted Index:</h2> <ul>} foreach entry $permindex { | | > > | 119 120 121 122 123 124 125 126 127 128 129 130 131 | book</a> <li> <a href='$ROOT/help'>Command-line help</a> </ul> <a name="pindex"></a> <h2>Permuted Index:</h2> <ul>} foreach entry $permindex { foreach {title file bold} $entry break if {$bold} {set title <b>$title</b>} if {[string match /* $file]} {set file ../../..$file} puts $out "<li><a href=\"$file\">$title</a></li>" } puts $out "</ul></div>" |
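To make the permutation loop above concrete: the doclist entry "quickstart.wiki {Fossil Quick Start Guide}" yields the full title (rendered in bold) plus one rotated entry for every suffix whose first word is not a stopword, so the generated index contains roughly the following lines, which can be checked against the permutedindex.html diff later on this page:

<blockquote><pre>
Fossil Quick Start Guide
Quick Start Guide — Fossil
Start Guide — Fossil Quick
Guide — Fossil Quick Start
</pre></blockquote>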
Changes to www/newrepo.wiki.
︙ | ︙ | |||
51 52 53 54 55 56 57 | The next thing we need to do is <em>open</em> the repository. To do so we create a working directory and then <tt>cd</tt> to it: <verbatim> stephan@ludo:~/fossil$ mkdir demo stephan@ludo:~/fossil$ cd demo stephan@ludo:~/fossil/demo$ fossil open ../demo.fossil | | | 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 | The next thing we need to do is <em>open</em> the repository. To do so we create a working directory and then <tt>cd</tt> to it: <verbatim> stephan@ludo:~/fossil$ mkdir demo stephan@ludo:~/fossil$ cd demo stephan@ludo:~/fossil/demo$ fossil open ../demo.fossil stephan@ludo:~/fossil/demo$ </verbatim> That creates a file called <tt>_FOSSIL_</tt> in the current directory, and this file contains all kinds of fossil-related information about your local repository. You can ignore it for all purposes, but be sure not to accidentally remove it or otherwise damage it - it belongs to fossil, not you. |
︙ | ︙ |
Changes to www/password.wiki.
︙ | ︙ | |||
131 132 133 134 135 136 137 | will work for both older and newer clients. If the USER.PW on the server only holds the SHA1 hash of the password, then only newer clients will be able to authenticate to the server. The client normally gets the login and password from the "remote URL". <blockquote><pre> | | | 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 | will work for both older and newer clients. If the USER.PW on the server only holds the SHA1 hash of the password, then only newer clients will be able to authenticate to the server. The client normally gets the login and password from the "remote URL". <blockquote><pre> http://<span style="color:blue">login:password</span>@servername.org/path </pre></blockquote> For older clients, the password is used for the shared secret as stated in the URL and with no encoding. For newer clients, the shared secret is derived from the password by transforming the password using the SHA1 hash encoding described above. However, if the first character of the password is "*" (ASCII 0x2a) then the "*" is skipped and the rest of the password is used directly as the shared secret without the SHA1 encoding. <blockquote><pre> http://<span style="color:blue">login:*password</span>@servername.org/path </pre></blockquote> This *-before-the-password trick can be used by newer clients to sync against a legacy server that does not understand the new SHA1 password encoding.
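As a concrete illustration of the two URL forms above, the "*" prefix slots into an ordinary sync or clone URL; the user name, password, and host below are made up for the example:

<blockquote><pre>
fossil sync http://alice:*mypassword@legacy.example.com/
</pre></blockquote>

The leading "*" is consumed by the client; as described above, the remaining characters are then used directly as the shared secret instead of being run through the SHA1 encoding.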
Changes to www/permutedindex.html.
︙ | ︙ | |||
17 18 19 20 21 22 23 | <li> <a href='$ROOT/help'>Command-line help</a> </ul> <a name="pindex"></a> <h2>Permuted Index:</h2> <ul> <li><a href="fiveminutes.wiki">5 Minutes as a Single User — Update and Running in</a></li> <li><a href="fossil-from-msvc.wiki">2010 IDE — Integrating Fossil in the Microsoft Express</a></li> | | | < < > | | > | | | > > | | | > | | | | > | | > > > | | | | | | | | | | | | < < | | | > | | > | | > > > | | | | | > | | > | > > | > | | | | | | | | | | | > | | > | > > > | | > | 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 | <li> <a href='$ROOT/help'>Command-line help</a> </ul> <a name="pindex"></a> <h2>Permuted Index:</h2> <ul> <li><a href="fiveminutes.wiki">5 Minutes as a Single User — Update and Running in</a></li> <li><a href="fossil-from-msvc.wiki">2010 IDE — Integrating Fossil in the Microsoft Express</a></li> <li><a href="tech_overview.wiki"><b>A Technical Overview Of The Design And Implementation Of Fossil</b></a></li> <li><a href="adding_code.wiki"><b>Adding New Features To Fossil</b></a></li> <li><a href="copyright-release.html">Agreement — Contributor License</a></li> <li><a href="delta_encoder_algorithm.wiki">Algorithm — Fossil Delta Encoding</a></li> <li><a href="blame.wiki">Algorithm Of Fossil — The Annotate/Blame</a></li> <li><a href="blame.wiki">Annotate/Blame Algorithm Of Fossil — The</a></li> <li><a href="customskin.md">Appearance of Web Pages — Theming: Customizing The</a></li> <li><a href="faq.wiki">Asked Questions — Frequently</a></li> <li><a href="password.wiki">Authentication — Password Management And</a></li> <li><a href="whyusefossil.wiki"><b>Benefits Of Version Control</b></a></li> <li><a href="antibot.wiki">Bots — Defense against Spiders and</a></li> <li><a href="private.wiki">Branches — Creating, Syncing, and Deleting Private</a></li> <li><a href="branching.wiki"><b>Branching, Forking, Merging, and Tagging</b></a></li> <li><a href="bugtheory.wiki"><b>Bug Tracking In Fossil</b></a></li> <li><a href="makefile.wiki">Build Process — The Fossil</a></li> <li><a href="aboutcgi.wiki">CGI Works In Fossil — How</a></li> <li><a href="changes.wiki">Changelog — Fossil</a></li> <li><a href="checkin_names.wiki"><b>Check-in And Version Names</b></a></li> <li><a href="checkin.wiki"><b>Check-in Checklist</b></a></li> <li><a href="checkin.wiki">Checklist — Check-in</a></li> <li><a href="../test/release-checklist.wiki">Checklist — Pre-Release Testing</a></li> <li><a href="foss-cklist.wiki"><b>Checklist For Successful Open-Source Projects</b></a></li> <li><a href="selfcheck.wiki">Checks — Fossil Repository Integrity Self</a></li> <li><a href="childprojects.wiki"><b>Child Projects</b></a></li> <li><a href="contribute.wiki">Code or Documentation To The Fossil Project — Contributing</a></li> <li><a href="style.wiki">Code Style Guidelines — 
Source</a></li> <li><a href="../../../help">Commands and Webpages — Lists of</a></li> <li><a href="build.wiki"><b>Compiling and Installing Fossil</b></a></li> <li><a href="concepts.wiki">Concepts — Fossil Core</a></li> <li><a href="server.wiki">Configure A Fossil Server — How To</a></li> <li><a href="shunning.wiki">Content From Fossil — Shunning: Deleting</a></li> <li><a href="contribute.wiki"><b>Contributing Code or Documentation To The Fossil Project</b></a></li> <li><a href="copyright-release.html"><b>Contributor License Agreement</b></a></li> <li><a href="whyusefossil.wiki">Control — Benefits Of Version</a></li> <li><a href="concepts.wiki">Core Concepts — Fossil</a></li> <li><a href="newrepo.wiki">Create A New Fossil Repository — How To</a></li> <li><a href="private.wiki"><b>Creating, Syncing, and Deleting Private Branches</b></a></li> <li><a href="qandc.wiki">Criticisms — Questions And</a></li> <li><a href="customskin.md">Customizing The Appearance of Web Pages — Theming:</a></li> <li><a href="custom_ticket.wiki"><b>Customizing The Ticket System</b></a></li> <li><a href="customgraph.md">Customizing the Timeline Graph — Theming:</a></li> <li><a href="tech_overview.wiki">Databases Used By Fossil — SQLite</a></li> <li><a href="antibot.wiki"><b>Defense against Spiders and Bots</b></a></li> <li><a href="shunning.wiki">Deleting Content From Fossil — Shunning:</a></li> <li><a href="private.wiki">Deleting Private Branches — Creating, Syncing, and</a></li> <li><a href="delta_encoder_algorithm.wiki">Delta Encoding Algorithm — Fossil</a></li> <li><a href="delta_format.wiki">Delta Format — Fossil</a></li> <li><a href="tech_overview.wiki">Design And Implementation Of Fossil — A Technical Overview Of The</a></li> <li><a href="theory1.wiki">Design Of The Fossil DVCS — Thoughts On The</a></li> <li><a href="embeddeddoc.wiki">Documentation — Embedded Project</a></li> <li><a href="contribute.wiki">Documentation To The Fossil Project — Contributing Code or</a></li> <li><a href="theory1.wiki">DVCS — Thoughts On The Design Of The Fossil</a></li> <li><a href="quotes.wiki">DVCSes in General — Quotes: What People Are Saying About Fossil, Git, and</a></li> <li><a href="embeddeddoc.wiki"><b>Embedded Project Documentation</b></a></li> <li><a href="delta_encoder_algorithm.wiki">Encoding Algorithm — Fossil Delta</a></li> <li><a href="encryptedrepos.wiki">Encrypted Repositories — How To Use</a></li> <li><a href="env-opts.md"><b>Environment Variables and Global Options</b></a></li> <li><a href="event.wiki"><b>Events</b></a></li> <li><a href="webpage-ex.md">Examples — Webpage</a></li> <li><a href="inout.wiki">Export To And From Git — Import And</a></li> <li><a href="fossil-from-msvc.wiki">Express 2010 IDE — Integrating Fossil in the Microsoft</a></li> <li><a href="adding_code.wiki">Features To Fossil — Adding New</a></li> <li><a href="fileformat.wiki">File Format — Fossil</a></li> <li><a href="unvers.wiki">Files — Unversioned</a></li> <li><a href="branching.wiki">Forking, Merging, and Tagging — Branching,</a></li> <li><a href="delta_format.wiki">Format — Fossil Delta</a></li> <li><a href="fileformat.wiki">Format — Fossil File</a></li> <li><a href="../../../md_rules">Formatting Rules — Markdown</a></li> <li><a href="../../../wiki_rules">Formatting Rules — Wiki</a></li> <li><a href="changes.wiki"><b>Fossil Changelog</b></a></li> <li><a href="concepts.wiki"><b>Fossil Core Concepts</b></a></li> <li><a href="delta_encoder_algorithm.wiki"><b>Fossil Delta Encoding Algorithm</b></a></li> <li><a 
href="delta_format.wiki"><b>Fossil Delta Format</b></a></li> <li><a href="fileformat.wiki"><b>Fossil File Format</b></a></li> <li><a href="quickstart.wiki"><b>Fossil Quick Start Guide</b></a></li> <li><a href="selfcheck.wiki"><b>Fossil Repository Integrity Self Checks</b></a></li> <li><a href="selfhost.wiki"><b>Fossil Self Hosting Repositories</b></a></li> <li><a href="settings.wiki"><b>Fossil Settings</b></a></li> <li><a href="hints.wiki"><b>Fossil Tips And Usage Hints</b></a></li> <li><a href="fossil-v-git.wiki"><b>Fossil Versus Git</b></a></li> <li><a href="quotes.wiki">Fossil, Git, and DVCSes in General — Quotes: What People Are Saying About</a></li> <li><a href="faq.wiki"><b>Frequently Asked Questions</b></a></li> <li><a href="quotes.wiki">General — Quotes: What People Are Saying About Fossil, Git, and DVCSes in</a></li> <li><a href="fossil-v-git.wiki">Git — Fossil Versus</a></li> <li><a href="inout.wiki">Git — Import And Export To And From</a></li> <li><a href="quotes.wiki">Git, and DVCSes in General — Quotes: What People Are Saying About Fossil,</a></li> <li><a href="env-opts.md">Global Options — Environment Variables and</a></li> <li><a href="customgraph.md">Graph — Theming: Customizing the Timeline</a></li> <li><a href="quickstart.wiki">Guide — Fossil Quick Start</a></li> <li><a href="style.wiki">Guidelines — Source Code Style</a></li> <li><a href="hacker-howto.wiki"><b>Hacker How-To</b></a></li> <li><a href="adding_code.wiki"><b>Hacking Fossil</b></a></li> <li><a href="hints.wiki">Hints — Fossil Tips And Usage</a></li> <li><a href="index.wiki"><b>Home Page</b></a></li> <li><a href="selfhost.wiki">Hosting Repositories — Fossil Self</a></li> <li><a href="aboutcgi.wiki"><b>How CGI Works In Fossil</b></a></li> <li><a href="server.wiki"><b>How To Configure A Fossil Server</b></a></li> <li><a href="newrepo.wiki"><b>How To Create A New Fossil Repository</b></a></li> <li><a href="encryptedrepos.wiki"><b>How To Use Encrypted Repositories</b></a></li> <li><a href="hacker-howto.wiki">How-To — Hacker</a></li> <li><a href="fossil-from-msvc.wiki">IDE — Integrating Fossil in the Microsoft Express 2010</a></li> <li><a href="tech_overview.wiki">Implementation Of Fossil — A Technical Overview Of The Design And</a></li> <li><a href="inout.wiki"><b>Import And Export To And From Git</b></a></li> <li><a href="build.wiki">Installing Fossil — Compiling and</a></li> <li><a href="fossil-from-msvc.wiki"><b>Integrating Fossil in the Microsoft Express 2010 IDE</b></a></li> <li><a href="selfcheck.wiki">Integrity Self Checks — Fossil Repository</a></li> <li><a href="webui.wiki">Interface — The Fossil Web</a></li> <li><a href="th1.md">Language — The TH1 Scripting</a></li> <li><a href="copyright-release.html">License Agreement — Contributor</a></li> <li><a href="../../../help"><b>Lists of Commands and Webpages</b></a></li> <li><a href="password.wiki">Management And Authentication — Password</a></li> <li><a href="../../../sitemap">Map — Site</a></li> <li><a href="../../../md_rules"><b>Markdown Formatting Rules</b></a></li> <li><a href="branching.wiki">Merging, and Tagging — Branching, Forking,</a></li> <li><a href="fossil-from-msvc.wiki">Microsoft Express 2010 IDE — Integrating Fossil in the</a></li> <li><a href="fiveminutes.wiki">Minutes as a Single User — Update and Running in 5</a></li> <li><a href="checkin_names.wiki">Names — Check-in And Version</a></li> <li><a href="adding_code.wiki">New Features To Fossil — Adding</a></li> <li><a href="newrepo.wiki">New Fossil Repository — How To Create A</a></li> <li><a 
href="foss-cklist.wiki">Open-Source Projects — Checklist For Successful</a></li> <li><a href="pop.wiki">Operation — Principles Of</a></li> <li><a href="env-opts.md">Options — Environment Variables and Global</a></li> <li><a href="tech_overview.wiki">Overview Of The Design And Implementation Of Fossil — A Technical</a></li> <li><a href="index.wiki">Page — Home</a></li> <li><a href="customskin.md">Pages — Theming: Customizing The Appearance of Web</a></li> <li><a href="password.wiki"><b>Password Management And Authentication</b></a></li> <li><a href="quotes.wiki">People Are Saying About Fossil, Git, and DVCSes in General — Quotes: What</a></li> <li><a href="stats.wiki"><b>Performance Statistics</b></a></li> <li><a href="../test/release-checklist.wiki"><b>Pre-Release Testing Checklist</b></a></li> <li><a href="pop.wiki"><b>Principles Of Operation</b></a></li> <li><a href="private.wiki">Private Branches — Creating, Syncing, and Deleting</a></li> <li><a href="makefile.wiki">Process — The Fossil Build</a></li> <li><a href="contribute.wiki">Project — Contributing Code or Documentation To The Fossil</a></li> <li><a href="embeddeddoc.wiki">Project Documentation — Embedded</a></li> <li><a href="foss-cklist.wiki">Projects — Checklist For Successful Open-Source</a></li> <li><a href="childprojects.wiki">Projects — Child</a></li> <li><a href="sync.wiki">Protocol — The Fossil Sync</a></li> <li><a href="faq.wiki">Questions — Frequently Asked</a></li> <li><a href="qandc.wiki"><b>Questions And Criticisms</b></a></li> <li><a href="quickstart.wiki">Quick Start Guide — Fossil</a></li> <li><a href="quotes.wiki"><b>Quotes: What People Are Saying About Fossil, Git, and DVCSes in General</b></a></li> <li><a href="selfhost.wiki">Repositories — Fossil Self Hosting</a></li> <li><a href="encryptedrepos.wiki">Repositories — How To Use Encrypted</a></li> <li><a href="newrepo.wiki">Repository — How To Create A New Fossil</a></li> <li><a href="selfcheck.wiki">Repository Integrity Self Checks — Fossil</a></li> <li><a href="reviews.wiki"><b>Reviews</b></a></li> <li><a href="../../../md_rules">Rules — Markdown Formatting</a></li> <li><a href="../../../wiki_rules">Rules — Wiki Formatting</a></li> <li><a href="fiveminutes.wiki">Running in 5 Minutes as a Single User — Update and</a></li> <li><a href="quotes.wiki">Saying About Fossil, Git, and DVCSes in General — Quotes: What People Are</a></li> <li><a href="th1.md">Scripting Language — The TH1</a></li> <li><a href="selfcheck.wiki">Self Checks — Fossil Repository Integrity</a></li> <li><a href="selfhost.wiki">Self Hosting Repositories — Fossil</a></li> <li><a href="server.wiki">Server — How To Configure A Fossil</a></li> <li><a href="settings.wiki">Settings — Fossil</a></li> <li><a href="shunning.wiki"><b>Shunning: Deleting Content From Fossil</b></a></li> <li><a href="fiveminutes.wiki">Single User — Update and Running in 5 Minutes as a</a></li> <li><a href="../../../sitemap"><b>Site Map</b></a></li> <li><a href="style.wiki"><b>Source Code Style Guidelines</b></a></li> <li><a href="antibot.wiki">Spiders and Bots — Defense against</a></li> <li><a href="tech_overview.wiki"><b>SQLite Databases Used By Fossil</b></a></li> <li><a href="ssl.wiki">SSL with Fossil — Using</a></li> <li><a href="quickstart.wiki">Start Guide — Fossil Quick</a></li> <li><a href="stats.wiki">Statistics — Performance</a></li> <li><a href="style.wiki">Style Guidelines — Source Code</a></li> <li><a href="foss-cklist.wiki">Successful Open-Source Projects — Checklist For</a></li> <li><a href="sync.wiki">Sync 
Protocol — The Fossil</a></li> <li><a href="private.wiki">Syncing, and Deleting Private Branches — Creating,</a></li> <li><a href="custom_ticket.wiki">System — Customizing The Ticket</a></li> <li><a href="tickets.wiki">System — The Fossil Ticket</a></li> <li><a href="branching.wiki">Tagging — Branching, Forking, Merging, and</a></li> <li><a href="tech_overview.wiki">Technical Overview Of The Design And Implementation Of Fossil — A</a></li> <li><a href="../test/release-checklist.wiki">Testing Checklist — Pre-Release</a></li> <li><a href="th1.md">TH1 Scripting Language — The</a></li> <li><a href="blame.wiki"><b>The Annotate/Blame Algorithm Of Fossil</b></a></li> <li><a href="makefile.wiki"><b>The Fossil Build Process</b></a></li> <li><a href="sync.wiki"><b>The Fossil Sync Protocol</b></a></li> <li><a href="tickets.wiki"><b>The Fossil Ticket System</b></a></li> <li><a href="webui.wiki"><b>The Fossil Web Interface</b></a></li> <li><a href="th1.md"><b>The TH1 Scripting Language</b></a></li> <li><a href="customskin.md"><b>Theming: Customizing The Appearance of Web Pages</b></a></li> <li><a href="customgraph.md"><b>Theming: Customizing the Timeline Graph</b></a></li> <li><a href="theory1.wiki"><b>Thoughts On The Design Of The Fossil DVCS</b></a></li> <li><a href="custom_ticket.wiki">Ticket System — Customizing The</a></li> <li><a href="tickets.wiki">Ticket System — The Fossil</a></li> <li><a href="customgraph.md">Timeline Graph — Theming: Customizing the</a></li> <li><a href="hints.wiki">Tips And Usage Hints — Fossil</a></li> <li><a href="bugtheory.wiki">Tracking In Fossil — Bug</a></li> <li><a href="unvers.wiki"><b>Unversioned Files</b></a></li> <li><a href="fiveminutes.wiki"><b>Update and Running in 5 Minutes as a Single User</b></a></li> <li><a href="hints.wiki">Usage Hints — Fossil Tips And</a></li> <li><a href="fiveminutes.wiki">User — Update and Running in 5 Minutes as a Single</a></li> <li><a href="ssl.wiki"><b>Using SSL with Fossil</b></a></li> <li><a href="env-opts.md">Variables and Global Options — Environment</a></li> <li><a href="whyusefossil.wiki">Version Control — Benefits Of</a></li> <li><a href="checkin_names.wiki">Version Names — Check-in And</a></li> <li><a href="fossil-v-git.wiki">Versus Git — Fossil</a></li> <li><a href="webui.wiki">Web Interface — The Fossil</a></li> <li><a href="customskin.md">Web Pages — Theming: Customizing The Appearance of</a></li> <li><a href="webpage-ex.md"><b>Webpage Examples</b></a></li> <li><a href="../../../help">Webpages — Lists of Commands and</a></li> <li><a href="quotes.wiki">What People Are Saying About Fossil, Git, and DVCSes in General — Quotes:</a></li> <li><a href="whyusefossil.wiki"><b>Why You Should Use Fossil</b></a></li> <li><a href="../../../wiki_rules"><b>Wiki Formatting Rules</b></a></li> <li><a href="wikitheory.wiki"><b>Wiki In Fossil</b></a></li> <li><a href="aboutcgi.wiki">Works In Fossil — How CGI</a></li> <li><a href="whyusefossil.wiki">You Should Use Fossil — Why</a></li> </ul></div> |
Changes to www/pop.wiki.
|
| | > | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 | <title>Principles Of Operation</title> <h1 align="center">Principles Of Operation</h1> <p> This page attempts to define the foundational principals upon which Fossil is built. </p> <ul> <li><p>A project consists of source files, wiki pages, and trouble tickets, and control files (collectively "artifacts"). All historical copies of all artifacts are saved. The project maintains an audit trail.</p></li> <li><p>A project resides in one or more repositories. Each repository is administered and operates independently of the others.</p></li> <li><p>Each repository has both global and local state. The global state is common to all repositories (or at least has the potential to be shared in common when the repositories are fully synchronized). The local state for each repository is private to that repository. The global state represents the content of the project. The local state identifies the authorized users and access policies for a particular repository.</p></li> <li><p>The global state of a repository is an unordered collection of artifacts. Each artifact is named by its SHA1 hash encoded in lowercase hexadecimal. In many contexts, the name can be abbreviated to a unique prefix. A five- or six-character prefix usually suffices to uniquely identify a file.</p></li> <li><p>Because artifacts are named by their SHA1 hash, all artifacts are immutable. Any change to the content of an artifact also changes the hash that forms the artifacts name, thus creating a new artifact. Both the old original version of the artifact and the new change are preserved under different names.</p></li> <li><p>It is theoretically possible for two artifacts with different content to share the same hash. But finding two such artifacts is so incredibly difficult and unlikely that we consider it to be an impossibility.</p></li> <li><p>The signature of an artifact is the SHA1 hash of the artifact itself, exactly as it would appear in a disk file. No prefix or meta-information about the artifact is added before computing the hash. So you can always find the SHA1 signature of a file by using the "sha1sum" command-line utility.</p></li> <li><p>The artifacts that comprise the global state of a repository |
︙ | ︙ |
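As the Principles Of Operation text above notes, an artifact's name is nothing more than the SHA1 hash of its bytes, so the ordinary command-line tool reproduces it; the file name here is an arbitrary example:

<blockquote><pre>
sha1sum foo.txt
</pre></blockquote>

The first token of the output, 40 lowercase hexadecimal digits, is exactly the name Fossil would use for that content as an artifact.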
Changes to www/private.wiki.
︙ | ︙ | |||
39 40 41 42 43 44 45 | visible to other users of the project. <h2>Syncing Private Branches</h2> A private branch normally stays on the one repository where it was originally created. But sometimes you want to share private branches with another repository. For example, you might be building a cross-platform | | | 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 | visible to other users of the project. <h2>Syncing Private Branches</h2> A private branch normally stays on the one repository where it was originally created. But sometimes you want to share private branches with another repository. For example, you might be building a cross-platform application and have separate repositories on your Windows laptop, your Linux desktop, and your iMac. You can transfer private branches between these machines by using the --private option on the "sync", "push", "pull", and "clone" commands. For example, if you are running "fossil server" on your Linux box and you want to clone that repository to your Mac, including all private branches, use: <blockquote><pre> |
︙ | ︙ | |||
65 66 67 68 69 70 71 | you leave the "x" capability turned off on all repositories used for collaboration (repositories to which many people push and pull) and only enable "x" for local repositories when you need to share private branches. Private branch sync only works if you use the --private command-line option. Private branches are never synced via the auto-sync mechanism. Once | | | 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 | you leave the "x" capability turned off on all repositories used for collaboration (repositories to which many people push and pull) and only enable "x" for local repositories when you need to share private branches. Private branch sync only works if you use the --private command-line option. Private branches are never synced via the auto-sync mechanism. Once again, this restriction is designed to make it hard to accidentally push private branches beyond their intended audience. <h2>Purging Private Branches</h2> You can remove all private branches from a repository using this command: <blockquote><pre> fossil scrub --private </pre></blockquote> Note that the above is a permanent and irreversible change. You will be asked to confirm before continuing. Once the private branches are removed, they cannot be retrieved (unless you have synced them to another repository.) So be careful with the command. <h2>Additional Notes</h2> All of the features above apply to <u>all</u> private branches in a single repository at once. There is no mechanism in Fossil (currently) that allows you to push, pull, clone, sync, or scrub an individual private branch within a repository that contains multiple private branches.
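As a concrete sketch of the --private traffic described above, moving a private branch between two of your own clones might look like the following, where the server address is a placeholder:

<blockquote><pre>
fossil push --private http://192.168.1.10:8080/
fossil pull --private http://192.168.1.10:8080/
</pre></blockquote>

Remember that the receiving repository must grant the "x" capability for the private content to be accepted, which is why the text above recommends leaving "x" turned off on shared collaboration servers.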
Changes to www/qandc.wiki.
1 2 3 4 5 6 7 | <nowiki> <h1 align="center">Questions And Criticisms</h1> <p>This page is a collection of real questions and criticisms that have been raised against fossil together with responses from the program's author.</p> <p>Note: See also the <a href="faq.wiki">Frequently Asked Questions</a>.</p> | > | 1 2 3 4 5 6 7 8 | <title>Questions And Criticisms</title> <nowiki> <h1 align="center">Questions And Criticisms</h1> <p>This page is a collection of real questions and criticisms that have been raised against fossil together with responses from the program's author.</p> <p>Note: See also the <a href="faq.wiki">Frequently Asked Questions</a>.</p> |
︙ | ︙ | |||
20 21 22 23 24 25 26 | <ol> <li> Integrated <a href="wikitheory.wiki">wiki</a>. </li> <li> Integrated <a href="bugtheory.wiki">bug tracking</a> </li> <li> Immutable artifacts </li> <li> Self-contained, stand-alone executable that can be run in a <a href="http://en.wikipedia.org/wiki/Chroot">chroot jail</a> </li> | | | | | | | | 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 | <ol> <li> Integrated <a href="wikitheory.wiki">wiki</a>. </li> <li> Integrated <a href="bugtheory.wiki">bug tracking</a> </li> <li> Immutable artifacts </li> <li> Self-contained, stand-alone executable that can be run in a <a href="http://en.wikipedia.org/wiki/Chroot">chroot jail</a> </li> <li> Simple, well-defined, <a href="fileformat.wiki">enduring file format</a> </li> <li> Integrated <a href="webui.wiki">web interface</a> </li> </ol> </blockquote> <b>Why should I use this rather than Trac?</b> <blockquote> <ol> <li> Fossil is distributed. You can view and/or edit tickets, wiki, and code while off network, then sync your changes later. With Trac, you can only view and edit tickets and wiki while you are connected to the server. </li> <li> Fossil is lightweight and fully self-contained. It is very easy to setup on a low-resource machine. Fossil does not require an administrator.</li> <li> Fossil integrates code versioning into the same repository with wiki and tickets. There is nothing extra to add or install. Fossil is an all-in-one turnkey solution. </li> </ol> </blockquote> <b>Love the concept here. Anyone using this for real work yet?</b> <blockquote> Fossil is <a href="http://www.fossil-scm.org/">self-hosting</a>. In fact, this page was probably delivered to your web-browser via a working fossil instance. The same virtual machine that hosts http://www.fossil-scm.org/ (a <a href="http://www.linode.com/">Linode 720</a>) also hosts 24 other fossil repositories for various small projects. The documentation files for <a href="http://www.sqlite.org/">SQLite</a> are hosted in a fossil repository <a href="http://www.sqlite.org/docsrc/">here</a>, for example. Other projects are also adopting fossil. But fossil does not yet have the massive user base of git or mercurial. </blockquote> <b>Fossil looks like the bug tracker that would be in your Linksys Router's administration screen.</b> <blockquote> <p>I take a pragmatic approach to software: form follows function. To me, it is more important to have a reliable, fast, efficient, enduring, and simple DVCS than one that looks pretty.</p> <p>On the other hand, if you have patches that improve the appearance of Fossil without seriously compromising its reliability, performance, and/or maintainability, I will be happy to accept them. Fossil is self-hosting. Send email to request a password that will let you push to the main fossil repository.</p> </blockquote> <b>It would be useful to have a separate application that keeps the bug-tracking database in a versioned file. That file can then be pushed and pulled along with the rest repository.</b> <blockquote> <p>Fossil already <u>does</u> push and pull bugs along with the files in your repository. But fossil does <u>not</u> track bugs as files in the source tree. That approach to bug tracking was rejected for three reasons:</p> <ol> <li> Check-ins in fossil are immutable. 
So if tickets were part of the check-in, then there would be no way to add new tickets to a check-in as new bugs are discovered. |
︙ | ︙ | |||
106 107 108 109 110 111 112 | be permitted to create tickets. </ol> <p>These points are reiterated in the opening paragraphs of the <a href="bugtheory.wiki">Bug-Tracking In Fossil</a> document.</p> </blockquote> | | | 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 | be permitted to create tickets. </ol> <p>These points are reiterated in the opening paragraphs of the <a href="bugtheory.wiki">Bug-Tracking In Fossil</a> document.</p> </blockquote> <b>Fossil is already the name of a plan9 versioned append-only filesystem.</b> <blockquote> I did not know that. Perhaps they selected the name for the same reason that I did: because a repository with immutable artifacts preserves an excellent fossil record of a long-running project. </blockquote> |
︙ | ︙ | |||
135 136 137 138 139 140 141 | directly in the VCS - either they are under-featured compared to full software like Trac, or the VCS is massively bloated compared to Subversion or Bazaar.</b> <blockquote> <p>I have no doubt that Trac has many features that fossil lacks. But that is not the point. Fossil has several key features that Trac lacks and that | | | | | 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 | directly in the VCS - either they are under-featured compared to full software like Trac, or the VCS is massively bloated compared to Subversion or Bazaar.</b> <blockquote> <p>I have no doubt that Trac has many features that fossil lacks. But that is not the point. Fossil has several key features that Trac lacks and that I need: most notably the fact that fossil supports disconnected operation.</p> <p>As for bloat: Fossil is a single self-contained executable. You do not need any other packages (diff, patch, merge, cvs, svn, rcs, git, python, perl, tcl, apache, sqlite, and so forth) in order to run fossil. Fossil runs just fine in a chroot jail all by itself. And the self-contained fossil executable is much less than 1MB in size. (Update 2015-01-12: Fossil has grown in the years since the previous sentence was written but is still much less than 2MB according to "size" when compiled using -Os on x64 Linux.) Fossil is the very opposite of bloat.</p> </blockquote> </nowiki> |
Changes to www/quickstart.wiki.
1 2 3 4 5 6 7 8 9 | <title>Fossil Quick Start Guide</title> <h1 align="center">Fossil Quick Start</h1> <p>This is a guide to get you started using fossil quickly and painlessly.</p> <h2>Installing</h2> <p>Fossil is a single self-contained C program. You need to | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 | <title>Fossil Quick Start Guide</title> <h1 align="center">Fossil Quick Start</h1> <p>This is a guide to get you started using fossil quickly and painlessly.</p> <h2>Installing</h2> <p>Fossil is a single self-contained C program. You need to either download a <a href="http://www.fossil-scm.org/download.html">precompiled binary</a> or <a href="build.wiki">compile it yourself</a> from sources. Install fossil by putting the fossil binary someplace on your $PATH.</p> <a name="fslclone"></a> <h2>General Work Flow</h2> <p>Fossil works with repository files (a database with the project's complete history) and with checked-out local trees (the working directory you use to do your work). The workflow looks like this:</p> |
︙ | ︙ | |||
32 33 34 35 36 37 38 | <p>The following sections will give you a brief overview of these operations.</p> <h2>Starting A New Project</h2> <p>To start a new project with fossil, create a new empty repository this way: ([/help/init | more info]) </p> | | | | | | | | | | | | | | | | | | | | 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 | <p>The following sections will give you a brief overview of these operations.</p> <h2>Starting A New Project</h2> <p>To start a new project with fossil, create a new empty repository this way: ([/help/init | more info]) </p> <blockquote> <b>fossil init </b><i> repository-filename</i> </blockquote> <h2>Cloning An Existing Repository</h2> <p>Most fossil operations interact with a repository that is on the local disk drive, not on a remote system. Hence, before accessing a remote repository it is necessary to make a local copy of that repository. Making a local copy of a remote repository is called "cloning".</p> <p>Clone a remote repository as follows: ([/help/clone | more info])</p> <blockquote> <b>fossil clone</b> <i>URL repository-filename</i> </blockquote> <p>The <i>URL</i> specifies the fossil repository you want to clone. The <i>repository-filename</i> is the new local filename into which the cloned repository will be written. For example: <blockquote> <b>fossil clone http://www.fossil-scm.org/ myclone.fossil</b> </blockquote> <p>If the remote repository requires a login, include a userid in the URL like this: <blockquote> <b>fossil clone http://</b><i>userid</i><b>@www.fossil-scm.org/ myclone.fossil</b> </blockquote> <p>You will be prompted separately for the password. Use "%HH" escapes for special characters in the userid. Examples: "%40" in place of "@" and "%2F" in place of "/". <p>If you are behind a restrictive firewall, you might need to <a href="#proxy">specify an HTTP proxy</a>.</p> <p>A Fossil repository is a single disk file. Instead of cloning, you can just make a copy of the repository file (for example, using "scp"). Note, however, that the repository file contains auxiliary information above and beyond the versioned files, including some sensitive information such as password hashes and email addresses. If you want to share Fossil repositories directly, consider running the [/help/scrub|fossil scrub] command to remove sensitive information before transmitting the file. <h2>Importing From Another Version Control System</h2> <p>Rather than start a new project, or clone an existing Fossil project, you might prefer to <a href="./inout.wiki">import an existing Git project</a> into Fossil using the [/help/import | fossil import] command. <h2>Checking Out A Local Tree</h2> <p>To work on a project in fossil, you need to check out a local copy of the source tree. Create the directory you want to be the root of your tree and cd into that directory. Then do this: ([/help/open | more info])</p> <blockquote> <b>fossil open </b><i> repository-filename</i> </blockquote> <p>This leaves you with the newest version of the tree checked out. 
From anywhere underneath the root of your local tree, you can type commands like the following to find out the status of your local tree:</p> <blockquote> <b>[/help/info | fossil info]</b><br> <b>[/help/status | fossil status]</b><br> <b>[/help/changes | fossil changes]</b><br> <b>[/help/diff | fossil diff]</b><br> <b>[/help/timeline | fossil timeline]</b><br> <b>[/help/ls | fossil ls]</b><br> <b>[/help/branch | fossil branch]</b><br> </blockquote> <p>Note that Fossil allows you to make multiple check-outs in separate directories from the same repository. This enables you, for example, to do builds from multiple branches or versions at the same time without having to generate extra clones.</p> <p>To switch a checkout between different versions and branches, use:</p> <blockquote> <b>[/help/update | fossil update]</b><br> <b>[/help/checkout | fossil checkout]</b><br> </blockquote> <p>[/help/update | update] honors the "autosync" option and does a "soft" switch, merging any local changes into the target version, whereas [/help/checkout | checkout] does not automatically sync and does a "hard" switch, overwriting local changes if told to do so.</p> <h2>Configuring Your Local Repository</h2> <p>When you create a new repository, either by cloning an existing project or create a new project of your own, you usually want to do some local configuration. This is easily accomplished using the web-server that is built into fossil. Start the fossil webserver like this: ([/help/ui | more info])</p> <blockquote> <b>fossil ui </b><i> repository-filename</i> </blockquote> <p>You can omit the <i>repository-filename</i> from the command above if you are inside a checked-out local tree.</p> <p>This starts a web server then automatically launches your web browser and makes it point to this web server. If your system has an unusual configuration, fossil might not be able to figure out how to start your web browser. In that case, first tell fossil where to find your web browser using a command like this:</p> <blockquote> <b>fossil setting web-browser </b><i> path-to-web-browser</i> </blockquote> <p>By default, fossil does not require a login for HTTP connections coming in from the IP loopback address 127.0.0.1. You can, and perhaps should, change this after you create a few users.</p> <p>When you are finished configuring, just press Control-C or use the <b>kill</b> command to shut down the mini-server.</p> <h2>Making Changes</h2> <p>To add new files to your project, or remove old files, use these commands:</p> |
︙ | ︙ | |||
192 193 194 195 196 197 198 | </blockquote> <p>You will be prompted for check-in comments using whatever editor is specified by your VISUAL or EDITOR environment variable.</p> In the default configuration, the [/help/commit|commit] command will also automatically [/help/push|push] your changes, but that | | | | | 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 | </blockquote> <p>You will be prompted for check-in comments using whatever editor is specified by your VISUAL or EDITOR environment variable.</p> In the default configuration, the [/help/commit|commit] command will also automatically [/help/push|push] your changes, but that feature can be disabled. (More information about [./concepts.wiki#workflow|autosync] and how to disable it.) Remember that your coworkers can not see your changes until you commit and push them.</p> <h2>Sharing Changes</h2> <p>When [./concepts.wiki#workflow|autosync] is turned off, the changes you [/help/commit | commit] are only on your local repository. To share those changes with other repositories, do:</p> <blockquote> <b>[/help/push | fossil push]</b> <i>URL</i> </blockquote> |
︙ | ︙ | |||
239 240 241 242 243 244 245 | date/time stamp. ([./checkin_names.wiki | more info]) If you omit the <i>VERSION</i>, then fossil moves you to the latest version of the branch you are currently on.</p> <p>The default behavior is for [./concepts.wiki#workflow|autosync] to be turned on. That means that a [/help/pull|pull] automatically occurs | | | 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 | date/time stamp. ([./checkin_names.wiki | more info]) If you omit the <i>VERSION</i>, then fossil moves you to the latest version of the branch you are currently on.</p> <p>The default behavior is for [./concepts.wiki#workflow|autosync] to be turned on. That means that a [/help/pull|pull] automatically occurs when you run [/help/update|update] and a [/help/push|push] happens automatically after you [/help/commit|commit]. So in normal practice, the push, pull, and sync commands are rarely used. But it is important to know about them, all the same.</p> <blockquote> <b>[/help/checkout | fossil checkout]</b> <i>VERSION</i> </blockquote>
︙ | ︙ | |||
340 341 342 343 344 345 346 | <ul> <li>[./server.wiki#inetd|inetd/xinetd] <li>[./server.wiki#cgi|CGI] <li>[./server.wiki#scgi|SCGI] </ul> <p>The [./selfhost.wiki | self-hosting fossil repositories] use | | | 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 | <ul> <li>[./server.wiki#inetd|inetd/xinetd] <li>[./server.wiki#cgi|CGI] <li>[./server.wiki#scgi|SCGI] </ul> <p>The [./selfhost.wiki | self-hosting fossil repositories] use CGI. <a name="proxy"></a> <h2>HTTP Proxies</h2> <p>If you are behind a restrictive firewall that requires you to use an HTTP proxy to reach the internet, then you can configure the proxy in three different ways. You can tell fossil about your proxy using |
︙ | ︙ | |||
380 381 382 383 384 385 386 | </blockquote> <p>Or unset the environment variable. The fossil setting for the HTTP proxy takes precedence over the environment variable and the command-line option overrides both. If you have a persistent proxy setting that you want to override for a one-time sync, that is easily done on the command-line. For example, to sync with | | | 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 | </blockquote> <p>Or unset the environment variable. The fossil setting for the HTTP proxy takes precedence over the environment variable and the command-line option overrides both. If you have a persistent proxy setting that you want to override for a one-time sync, that is easily done on the command-line. For example, to sync with a co-worker's repository on your LAN, you might type:</p> <blockquote> <b>fossil sync http://192.168.1.36:8080/ --proxy off</b> </blockquote> <h2>More Hints</h2> <p>A [/help | complete list of commands] is available, as is the [./hints.wiki|helpful hints] document. See the [./permutedindex.html#pindex|permuted index] for additional documentation. <p>Explore and have fun!</p>
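Pulling the proxy discussion above together, the same proxy can be named in any of the three places the text mentions: as a persistent setting, through the conventional http_proxy environment variable, or as a one-off command-line option. The proxy URL below is a placeholder for your own proxy:

<blockquote><pre>
fossil setting proxy http://proxy.example.com:8080/
export http_proxy=http://proxy.example.com:8080/
fossil sync http://www.fossil-scm.org/ --proxy http://proxy.example.com:8080/
</pre></blockquote>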
Changes to www/quotes.wiki.
1 2 3 4 5 6 7 8 9 | <title>What People Are Saying</title> The following are collected quotes from various forums and blogs about Fossil, Git, and DVCSes in general. This collection is put together by the creator of Fossil, so of course there is selection bias... <h2>On The Usability Of Git:</h2> <ol> | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 | <title>What People Are Saying</title> The following are collected quotes from various forums and blogs about Fossil, Git, and DVCSes in general. This collection is put together by the creator of Fossil, so of course there is selection bias... <h2>On The Usability Of Git:</h2> <ol> <li>Git approaches the usability of iptables, which is to say, utterly unusable unless you have the manpage tattooed on you arm. <blockquote> <i>by mml at [http://news.ycombinator.com/item?id=1433387]</i> </blockquote> <li><nowiki>It's simplest to think of the state of your [git] repository as a point in a high-dimensional "code-space", in which branches are represented as n-dimensional membranes, mapping the spatial loci of successive commits onto the projected manifold of each cloned repository.</nowiki> <blockquote> <i>At [http://tartley.com/?p=1267]</i> </blockquote> <li>Git is not a Prius. Git is a Model T. Its plumbing and wiring sticks out all over the place. You have to be a mechanic to operate it successfully or you'll be stuck on the side of the road when it breaks down. And it <b>will</b> break down. <blockquote> <i>Nick Farina at [http://nfarina.com/post/9868516270/git-is-simpler]</i> </blockquote> <li>Initial revision of "git", The information manager from hell <blockquote> <i>Linus Torvalds - 2005-04-07 22:13:13<br> Commit comment on the very first source-code check-in for git </blockquote> <li>I've been experimenting a lot with git at work. Damn, it's complicated. It has things to trip you up with that sane people just wouldn't ever both with including the ability to allow you to commit stuff in such a way that you can't find it again afterwards (!!!) Demented workflow complexity on acid? <p>* dkf really wishes he could use fossil instead</p> <blockquote> |
︙ | ︙ | |||
102 103 104 105 106 107 108 | I'm glad to be able to replace Git in every place that I possibly can with Fossil. <blockquote> <i>Joe Prostko at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg16716.html] </blockquote> | | | | | | | | | | | 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 | I'm glad to be able to replace Git in every place that I possibly can with Fossil. <blockquote> <i>Joe Prostko at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg16716.html] </blockquote> <li>Fossil is awesome!!! I have never seen an app like that before, such simplicity and flexibility!!! <blockquote> <i>zengr at [http://stackoverflow.com/questions/138621/best-version-control-for-lone-developer]</i> </blockquote> <li>This is my favourite VCS. I can carry it on a USB. And it's a complete system, with it's own server, ticketing system, Wiki pages, and a very, very helpful timeline visualization. And the entire program in a single file! <blockquote> <i>thunderbong commenting on hacker news: [https://news.ycombinator.com/item?id=9131619]</i> </blockquote> </ol> <h2>On Git Versus Fossil</h2> <ol> <li value=15> Just want to say thanks for fossil making my life easier.... Also <nowiki>[for]</nowiki> not having a misanthropic command line interface. <blockquote> <i>Joshua Paine at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg02736.html]</i> </blockquote> <li>We use it at a large university to manage code that small teams write. The runs everywhere, ease of installation and portability is something that seems to be a good fit with the environment we have (highly ditrobuted, sometimes very restrictive firewalls, OSX/Win/Linux). We are happy with it and teaching a Msc/Phd student (read complete novice) fossil has just been a smoother ride than Git was. <blockquote> <i>viablepanic at [http://www.reddit.com/r/programming/comments/bxcto/why_not_fossil_scm/]</i> </blockquote> <li>In the fossil community - and hence in fossil itself - development history is pretty much sacrosanct. The very name "fossil" was to chosen to reflect the unchanging nature of things in that history. <p>In git (or rather, the git community), the development history is part of the published aspect of the project, so it provides tools for rearranging that history so you can present what you "should" have done rather than what you actually did. |
︙ | ︙ |
Changes to www/reviews.wiki.
1 2 | <title>Reviews</title> <b>External links:</b> | | | 1 2 3 4 5 6 7 8 9 10 | <title>Reviews</title> <b>External links:</b> * [http://nixtu.blogspot.com/2010/03/fossil-dvcs-on-go-first-impressions.html | Fossil DVCS on the Go - First Impressions] * [http://blog.mired.org/2011/02/fossil-sweet-spot-in-vcs-space.html | Fossil - a sweet spot in the VCS space] by Mike Meyer. * [http://blog.s11n.net/?p=72|Four reasons to take a closer look at the Fossil SCM] by Stephan Beal <b>See Also:</b> |
︙ | ︙ | |||
20 21 22 23 24 25 26 | single .exe applications! </blockquote> <b>Joshua Paine on 2010-10-22:</b> <blockquote> | | | | | | | | | | | | | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 | single .exe applications! </blockquote> <b>Joshua Paine on 2010-10-22:</b> <blockquote> With one of my several hats on, I'm in a small team using git. Another team member just checked some stuff into trunk that should have been on a branch. Nothing else had happened since, so in fossil I would have just edited that commit and put it on a new branch. In git that can't actually be done without danger once other people have pulled, so I had to create a new commit rolling back the changes, then branch and cherry pick the earlier changes, then figure out how to make my new branch shared instead of private. Just want to say thanks for fossil making my life easier on most of my projects, and being able to move commits to another branch after the fact and shared-by-default branches are good features. Also not having a misanthropic command line interface. </blockquote> <b>Stephan Beal writes on 2009-01-11:</b> <blockquote> Sometime in late 2007 I came across a link to fossil on <a href="http://www.sqlite.org/">sqlite.org</a>. It was a good thing I bookmarked it, because I was never able to find the link again (it might have been in a bug report or something). The reasons I first took a close look at it were (A) it stemmed from the sqlite project, which I've held in high regards for years (e.g. I wrote JavaScript bindings for it: <a href="http://spiderape.sourceforge.net/plugins/sqlite/"> |
︙ | ︙ |
Changes to www/selfcheck.wiki.
︙ | ︙ | |||
12 13 14 15 16 17 18 | years now. Many bugs have been encountered. But, thanks in large part to the defensive measures described here, no data has been lost. The integrity checks are doing their job well.</p> <h2>Atomic Check-ins With Rollback</h2> The fossil repository is stored in an | | 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 | years now. Many bugs have been encountered. But, thanks in large part to the defensive measures described here, no data has been lost. The integrity checks are doing their job well.</p> <h2>Atomic Check-ins With Rollback</h2> The fossil repository is stored in an <a href="http://www.sqlite.org/">SQLite</a> database file. ([./tech_overview.wiki | Additional information] about the repository file format.) SQLite is very mature and stable and has been in wide-spread use for many years, so we are confident it will not cause repository corruption. SQLite databases do not corrupt even if a program crash, system crash, or power failure occurs in the middle of the update. If some kind of crash
︙ | ︙ | |||
59 60 61 62 63 64 65 | the SHA1 checksum again, and verifies that the checksums match. If anything does not match up, an error message is printed and the transaction rolls back. So, in other words, fossil always checks to make sure it can re-extract a file before it commits a change to that file. Hence bugs in fossil are unlikely to corrupt the repository in | | | 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 | the SHA1 checksum again, and verifies that the checksums match. If anything does not match up, an error message is printed and the transaction rolls back. So, in other words, fossil always checks to make sure it can re-extract a file before it commits a change to that file. Hence bugs in fossil are unlikely to corrupt the repository in a way that prevents us from extracting historical versions of files. <h2>Checksum Over All Files In A Check-in</h2> Manifest artifacts that define a check-in have two fields (the R-card and Z-card) that record MD5 hashes of the manifest itself and of all other files in the manifest. Prior to any check-in |
︙ | ︙ | |||
100 101 102 103 104 105 106 | doing all of the checksumming and verification outlined above. Fossil takes the philosophy of the <a href="http://en.wikipedia.org/wiki/The_Tortoise_and_the_Hare">tortoise</a>: reliability is more important than raw speed. The developers of fossil see no merit in getting the wrong answer quickly. Fossil may not be the fastest versioning system, but it is "fast enough". | | 100 101 102 103 104 105 106 107 108 | doing all of the checksumming and verification outlined above. Fossil takes the philosophy of the <a href="http://en.wikipedia.org/wiki/The_Tortoise_and_the_Hare">tortoise</a>: reliability is more important than raw speed. The developers of fossil see no merit in getting the wrong answer quickly. Fossil may not be the fastest versioning system, but it is "fast enough". Fossil runs quickly enough to stay out of the developer's way. Most operations complete in under a second.
Changes to www/selfhost.wiki.
1 2 3 4 5 6 7 8 9 10 11 | <title>Fossil Self-Hosting Repositories</title> Fossil has self-hosted since 2007-07-21. As of this writing (2009-08-24) there are three publicly accessible repositories for the Fossil source code: 1. [http://www.fossil-scm.org/] 2. [http://www2.fossil-scm.org/] 3. [http://www3.fossil-scm.org/site.cgi] The canonical repository is (1). Repositories (2) and (3) automatically | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 | <title>Fossil Self-Hosting Repositories</title> Fossil has self-hosted since 2007-07-21. As of this writing (2009-08-24) there are three publicly accessible repositories for the Fossil source code: 1. [http://www.fossil-scm.org/] 2. [http://www2.fossil-scm.org/] 3. [http://www3.fossil-scm.org/site.cgi] The canonical repository is (1). Repositories (2) and (3) automatically stay in synchronization with (1) via a <a href="http://en.wikipedia.org/wiki/Cron">cron job</a> that invokes "fossil sync" at regular intervals. Note that the two secondary repositories are more than just read-only mirrors. All three servers support full read/write capabilities. Changes (such as new tickets or wiki or check-ins) can be implemented on any of the three servers and those changes automatically propagate to the other two servers. Server (1) runs as a CGI script on a <a href="http://www.linode.com/">Linode 1024</a> located in Dallas, TX - on the same virtual machine that hosts <a href="http://www.sqlite.org/">SQLite</a> and over a dozen other smaller projects. This demonstrates that Fossil can run on a low-power host processor. Multiple fossil-based projects can easily be hosted on the same machine, even if that machine is itself one of several dozen virtual machines on single physical box. The CGI script that runs the canonical Fossil self-hosting repository is as follows: <blockquote><pre> #!/usr/bin/fossil repository: /fossil/fossil.fossil </pre></blockquote> Server (3) runs as a CGI script on a shared hosting account at <a href="http://www.he.net/">Hurricane Electric</a> in Fremont, CA. This server demonstrates the ability of Fossil to run on an economical shared-host web account with no privileges beyond port 80 HTTP access and CGI. It is not necessary to have a dedicated computer with administrator privileges to run Fossil. As far as we are aware, Fossil is the only full-featured configuration management system that can run in such a restricted environment. The CGI script that runs on the Hurricane Electric server is the same as the CGI script shown above, except that the pathnames are modified to suit the environment: <blockquote><pre> #!/home/hwaci/bin/fossil |
︙ | ︙ |
Changes to www/server.wiki.
1 2 3 4 5 | <title>How To Configure A Fossil Server</title> <h2>Introduction</h2><blockquote> <p>A server is not necessary to use Fossil, but a server does help in collaborating with peers. A Fossil server also works well as a complete website for a project. For example, the complete [https://www.fossil-scm.org/] website, including the | | < | | > > > > > | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 | <title>How To Configure A Fossil Server</title> <h2>Introduction</h2><blockquote> <p>A server is not necessary to use Fossil, but a server does help in collaborating with peers. A Fossil server also works well as a complete website for a project. For example, the complete [https://www.fossil-scm.org/] website, including the page you are now reading, is just a Fossil server displaying the content of the self-hosting repository for Fossil.</p> <p>This article is a guide for setting up your own Fossil server. <p>See "[./aboutcgi.wiki|How CGI Works In Fossil]" for background information on the underlying CGI technology. See "[./sync.wiki|The Fossil Sync Protocol]" for information on the wire protocol used for client/server communication.</p> </blockquote> <h2>Overview</h2><blockquote> There are basically four ways to set up a Fossil server: <ol> <li>A stand-alone server <li>Using inetd or xinetd or stunnel <li>CGI <li>SCGI (a.k.a. SimpleCGI) </ol> Each of these can serve either a single repository, or a directory hierarchy containing many repositories with names ending in ".fossil". </blockquote> <a name="standalone"></a> <h2>Standalone server</h2><blockquote> The easiest way to set up a Fossil server is to use either the [/help/server|server] or the [/help/ui|ui] commands: <ul> <li><b>fossil server</b> <i>REPOSITORY</i> <li><b>fossil ui</b> <i>REPOSITORY</i> </ul> <p> The <i>REPOSITORY</i> argument is either the name of the repository file, or a directory containing many repositories. Both of these commands start a Fossil server, usually on TCP port 8080, though a higher numbered port might also be used if 8080 is already occupied. You can access these using URLs of the form <b>http://localhost:8080/</b>, or if <i>REPOSITORY</i> is a directory, URLs of the form <b>http://localhost:8080/</b><i>repo</i><b>/</b> where <i>repo</i> is the base name of the repository file without the ".fossil" suffix. The difference between "ui" and "server" is that "ui" will also start a web browser and point it to the URL mentioned above, and the "ui" command binds to the loopback IP address (127.0.0.1) only so that the "ui" command cannot be used to serve content to a different machine. </p> <p> If one of the commands above is run from within an open checkout, then the <i>REPOSITORY</i> argument can be omitted and the checkout is used as the repository. |
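<p>For example, either of the following commands starts a server; the port number and paths shown here are only illustrative:</p>
<blockquote><pre>
fossil server --port 8080 /home/fossil/repos
fossil ui ~/myproject.fossil
</pre></blockquote>
<p>The first form serves every repository file ending in ".fossil" found in the given directory; the second serves a single repository and opens a web browser pointed at it.</p>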
︙ | ︙ | |||
69 70 71 72 73 74 75 | program with the arguments shown. Obviously you will need to modify the pathnames for your particular setup. The final argument is either the name of the fossil repository to be served, or a directory containing multiple repositories. </p> <p> | | | | | | | 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 | program with the arguments shown. Obviously you will need to modify the pathnames for your particular setup. The final argument is either the name of the fossil repository to be served, or a directory containing multiple repositories. </p> <p> If you use a non-standard TCP port on systems where the port-specification must be a symbolic name and cannot be numeric, add the desired name and port to /etc/services. For example, if you want your Fossil server running on TCP port 12345 instead of 80, you will need to add: <blockquote> <pre> fossil 12345/tcp #fossil server </pre> </blockquote> and use the symbolic name ('fossil' in this example) instead of the numeral ('12345') in inetd.conf. For details, see the relevant section in your system's documentation, e.g. the [https://www.freebsd.org/doc/en/books/handbook/network-inetd.html|FreeBSD Handbook] in case you use FreeBSD. </p> <p> If your system is running xinetd, then the configuration is likely to be in the file "/etc/xinetd.conf" or in a subfile of "/etc/xinetd.d". An xinetd configuration file will appear like this:</p> <blockquote> |
︙ | ︙ | |||
113 114 115 116 117 118 119 | In both cases notice that Fossil was launched as root. This is not required, but if it is done, then Fossil will automatically put itself into a chroot jail for the user who owns the fossil repository before reading any information off of the wire. </p> <p> Inetd or xinetd must be enabled, and must be (re)started whenever their configuration | | | | | | 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 | In both cases notice that Fossil was launched as root. This is not required, but if it is done, then Fossil will automatically put itself into a chroot jail for the user who owns the fossil repository before reading any information off of the wire. </p> <p> Inetd or xinetd must be enabled, and must be (re)started whenever their configuration changes - consult your system's documentation for details. </p> <p> [https://www.stunnel.org/ | Stunnel version 5] is an inetd-like process that accepts and decodes SSL-encrypted connections. Fossil can be run directly from stunnel in a manner similar to inetd and xinetd. This can be used to provide a secure link to a Fossil project. The configuration needed to get stunnel5 to invoke Fossil is very similar to the inetd and xinetd examples shown above. The relevant parts of an stunnel configuration might look something like the following: <blockquote><pre><nowiki> [https] accept = www.ubercool-project.org:443 TIMEOUTclose = 0 exec = /usr/bin/fossil execargs = /usr/bin/fossil http /home/fossil/ubercool.fossil --https </nowiki></pre></blockquote> See the stunnel5 documentation for further details about the /etc/stunnel/stunnel.conf configuration file. Note that the [/help/http|fossil http] command should include the --https option to let Fossil know to use "https" instead of "http" as the scheme on generated hyperlinks. <p> Using inetd or xinetd or stunnel is a more complex setup than the "standalone" server, but it has the advantage of only using system resources when an actual connection is attempted. If no-one ever connects to that port, a Fossil server will not (automatically) run. It has the disadvantage of requiring "root" access and therefore may not normally be available to lower-priced "shared" servers on the internet. </p> </blockquote> <a name="cgi"></a> <h2>Fossil as CGI</h2><blockquote> <p> A Fossil server can also be run from an ordinary web server as a CGI program. This feature allows Fossil to be seamlessly integrated into a larger website. CGI is how the [./selfhost.wiki | self-hosting fossil repositories] are implemented. </p> <p> To run Fossil as CGI, create a CGI script (here called "repo") in the CGI directory of your web server and having content like this: <blockquote><pre> #!/usr/bin/fossil |
︙ | ︙ | |||
178 179 180 181 182 183 184 | must be readable by the process which executes the CGI.</li> <li>ALL directories leading to the CGI script must also be readable and the CGI script itself must be executable for the user under which it will run (which often differs from the one running the web server - consult your site's documentation or administrator).</li> <li>The repository file AND the directory containing it must be writable by the same account which executes the Fossil binary (again, this might differ from the WWW user). The directory needs to be writable so that sqlite can write its journal files.</li> | | | | 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 | must be readable by the process which executes the CGI.</li> <li>ALL directories leading to the CGI script must also be readable and the CGI script itself must be executable for the user under which it will run (which often differs from the one running the web server - consult your site's documentation or administrator).</li> <li>The repository file AND the directory containing it must be writable by the same account which executes the Fossil binary (again, this might differ from the WWW user). The directory needs to be writable so that sqlite can write its journal files.</li> <li>Fossil must be able to create temporary files, the default directory for which depends on the OS. When the CGI process is operating within a chroot, ensure that this directory exists and is readable/writeable by the user who executes the Fossil binary.</li> </ul> </p> <p> Once the script is set up correctly, and assuming your server is also set |
︙ | ︙ | |||
213 214 215 216 217 218 219 | </p> </blockquote> <a name="scgi"></a> <h2>Fossil as SCGI</h2><blockquote> <p> | | | 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 | </p> </blockquote> <a name="scgi"></a> <h2>Fossil as SCGI</h2><blockquote> <p> The [/help/server|fossil server] command, described above as a way of starting a stand-alone web server, can also be used for SCGI. Simply add the --scgi command-line option and the stand-alone server will interpret and respond to the SimpleCGI or SCGI protocol rather than raw HTTP. This can be used in combination with a webserver (such as [http://nginx.org|Nginx]) that does not support CGI. A typical Nginx configuration to support SCGI with Fossil would look something like this: <blockquote><pre> |
︙ | ︙ | |||
278 279 280 281 282 283 284 | For more information, see <a href="./ssl.wiki">Using SSL with Fossil</a>. </p> </blockquote> <a name="loadmgmt"></a> <h2>Managing Server Load</h2><blockquote> <p> | | | | | | | 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 | For more information, see <a href="./ssl.wiki">Using SSL with Fossil</a>. </p> </blockquote> <a name="loadmgmt"></a> <h2>Managing Server Load</h2><blockquote> <p> A Fossil server is very efficient and normally presents a very light load on the server. The Fossil [./selfhost.wiki | self-hosting server] is a 1/24th slice VM at [http://www.linode.com | Linode.com] hosting 65 other repositories in addition to Fossil (and including some very high-traffic sites such as [http://www.sqlite.org] and [http://system.data.sqlite.org]) and it has a typical load of 0.05 to 0.1. A single HTTP request to Fossil normally takes less than 10 milliseconds of CPU time to complete. So requests can be arriving at a continuous rate of 20 or more per second and the CPU can still be mostly idle. <p> However, there are some Fossil web pages that can consume large amounts of CPU time, especially on repositories with a large number of files or with long revision histories. High CPU usage pages include [/help?cmd=/zip | /zip], [/help?cmd=/tarball | /tarball], [/help?cmd=/annotate | /annotate] and others. On very large repositories, these commands can take 15 seconds or more of CPU time. If these kinds of requests arrive too quickly, the load average on the server can grow dramatically, making the server unresponsive. <p> Fossil provides two capabilities to help avoid server overload problems due to excessive requests to expensive pages: <ol> <li><p>An optional cache is available that remembers the 10 most recently requested /zip or /tarball pages and returns the precomputed answer if the same page is requested again. <li><p>Page requests can be configured to fail with a [http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.3 | "503 Server Overload"] HTTP error if an expensive request is received while the host load average is too high. </ol> Both of these load-control mechanisms are turned off by default, but they are recommended for high-traffic sites. <p> The webpage cache is activated using the [/help?cmd=cache|fossil cache init] command-line on the server. Add a -R option to specify the specific repository |
︙ | ︙ | |||
344 345 346 347 348 349 350 | systems that support the "getloadavg()" API. Most modern Unix systems have this interface, but Windows does not, so the feature will not work on Windows. Note also that Linux implements "getloadavg()" by accessing the "/proc/loadavg" file in the "proc" virtual filesystem. If you are running a Fossil instance inside a chroot() jail on Linux, you will need to make the "/proc" file system available inside that jail in order for this feature to work. On the self-hosting Fossil repository, this was accomplished by adding a line | | | | > | 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 | systems that support the "getloadavg()" API. Most modern Unix systems have this interface, but Windows does not, so the feature will not work on Windows. Note also that Linux implements "getloadavg()" by accessing the "/proc/loadavg" file in the "proc" virtual filesystem. If you are running a Fossil instance inside a chroot() jail on Linux, you will need to make the "/proc" file system available inside that jail in order for this feature to work. On the self-hosting Fossil repository, this was accomplished by adding a line to the "/etc/fstab" file that looks like: <blockquote><pre> chroot_jail_proc /home/www/proc proc ro 0 0 </pre></blockquote> The /home/www/proc pathname should be adjusted so that the "/proc" component is in the root of the chroot jail, of course. <p> To see if the load-average limiter is functional, visit the [/test_env] page of the server to view the current load average. If the value for the load average is greater than zero, that means that it is possible to activate the load-average limiter on that repository. If the load average shows exactly "0.0", then that means that Fossil is unable to find the load average (either because it is in a chroot() jail without /proc access, or because it is running on a system that does not support "getloadavg()") and so the load-average limiter will not function. </blockquote> |
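<p>As a concrete sketch of enabling both load-control mechanisms on a server (the repository path is hypothetical, and this assumes the load-average limit is controlled through the "max-loadavg" setting):</p>
<blockquote><pre>
fossil cache init -R /home/www/myproject.fossil
fossil settings max-loadavg 8.0 -R /home/www/myproject.fossil
</pre></blockquote>
<p>The first command creates the cache of recent /zip and /tarball results for that repository; the second causes expensive pages to return "503 Server Overload" whenever the host load average is above 8.0.</p>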
Changes to www/settings.wiki.
1 2 3 4 5 6 7 8 | <title>Fossil Settings</title> <h2>Using Fossil Settings</h2> Settings control the behaviour of fossil. They are set with the <tt>fossil settings</tt> command, or through the web interface in the Settings page in the Admin section. | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 | <title>Fossil Settings</title> <h2>Using Fossil Settings</h2> Settings control the behaviour of fossil. They are set with the <tt>fossil settings</tt> command, or through the web interface in the Settings page in the Admin section. For a list of all settings, view the Settings page, or type <tt>fossil help settings</tt> from the command line. <h3>Repository settings</h3> Settings are set on a per-repository basis. When you clone a repository, a subset of settings are copied to your local repository. If you make a change to a setting on your local repository, it is not synced back to the server when you <tt>push</tt> or <tt>sync</tt>. If you make a change on the server, you need to manually make the change on all repositories which are cloned from this repository. You can also set a setting globally on your local machine. The value will be used for all repositories cloned to your machine, unless overridden explicitly in a particular repository. Global settings can be set by using the <tt>-global</tt> option on the <tt>fossil settings</tt> command. <h3>"Versionable" settings</h3> Most of the settings control the behaviour of fossil on your local machine, largely acting to reflect your preference on how you want to use Fossil, how you communicate with the server, or options for hosting a repository on the web. |
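<p>As a concrete illustration of the per-repository and global forms described above (the "autosync" setting is used here only as an example value):</p>
<blockquote><pre>
fossil settings autosync off
fossil settings autosync on -global
</pre></blockquote>
<p>The first command changes the setting only for the repository of the current checkout; the second establishes a machine-wide default that individual repositories can still override.</p>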
︙ | ︙ |
Changes to www/shunning.wiki.
1 2 3 4 5 6 | <title>Deleting Content From Fossil</title> <h1 align="center">Deleting Content From Fossil</h1> Fossil is designed to keep all historical content forever. Users of Fossil are discouraged from "deleting" content simply because it has become obsolete. Old content is part of the historical record | | | | | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 | <title>Deleting Content From Fossil</title> <h1 align="center">Deleting Content From Fossil</h1> Fossil is designed to keep all historical content forever. Users of Fossil are discouraged from "deleting" content simply because it has become obsolete. Old content is part of the historical record (part of the "fossil record") and should be maintained indefinitely. Such is the design intent of Fossil. Nevertheless, there may occasionally arise legitimate reasons for deleting content. Such reasons might include: * Spammers have inserted inappropriate content into a wiki page or ticket that needs to be removed. * A file that contains trade secrets or that is under copyright may have been accidentally committed and needs to be backed out. * A malformed control artifact may have been inserted and is disrupting the operation of Fossil. <h2>Shunning</h2> Fossil provides a mechanism called "shunning" for removing content from a repository. Every Fossil repository maintains a list of the SHA1 hash names of "shunned" artifacts. Fossil will refuse to push or pull any shunned artifact. Furthermore, all shunned artifacts (but not the shunning list itself) are removed from the repository whenever the repository is reconstructed using the "rebuild" command. <h3>Shunning lists are local state</h3> The shunning list is part of the local state of a Fossil repository. In other words, shunning does not propagate to a remote repository using the normal "sync" mechanism. An artifact can be shunned from one repository but be allowed to exist in another. The fact that the shunning list does not propagate is a security feature. If the shunning list propagated then a malicious user (or a bug in the fossil code) might introduce a shun record that would propagate through all repositories in a network and permanently destroy vital information. By refusing to propagate the shunning list, Fossil ensures that no remote user will ever be able to remove information from your personal repositories without your permission. The shunning list does not propagate to a remote repository by the normal "sync" mechanism, but it is still possible to copy shuns from one repository to another using the "configuration" command: <b>fossil configuration pull shun</b> <i>remote-url</i><br> <b>fossil configuration push shun</b> <i>remote-url</i> The two command above will pull or push shunning lists from or to the <i>remote-url</i> indicated and merge the lists on the receiving end. "Admin" privilege on the remote server is required in order to push a shun list. In contrast, the shunning list will be automatically received by default as part of a normal client "pull" operation unless disabled by the "<tt>auto-shun</tt>" setting. Note that the shunning list remains in the repository even after the shunned artifact has been removed. 
This is to prevent the artifact from being reintroduced into the repository the next time it syncs with another repository that has not shunned the artifact. <h3>Managing the shunning list</h3> The complete shunning list for a repository can be viewed by a user with "admin" privilege on the "/shun" URL of the web interface to Fossil. That URL is accessible under the "Admin" button on the default menu bar. Items can be added to or removed from the shunning list. "Sync" operations are inhibited as soon as the artifact is added to the shunning list, but the content of the artifact is not actually removed from the repository until the next time the repository is rebuilt. When viewing individual artifacts with the web interface, "admin" users will usually see a "Shun" option in the submenu that will take them directly to the shunning page and enable that artifact to be shunned with a single additional mouse click. |
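<p>As a concrete sketch of this workflow (the server URL and repository path are placeholders):</p>
<blockquote><pre>
fossil configuration pull shun https://example.org/myproject
fossil rebuild /home/user/myproject.fossil
</pre></blockquote>
<p>The first command merges the remote shunning list into the local repository; the rebuild then actually purges the shunned artifacts from local storage.</p>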
Changes to www/stats.wiki.
1 2 3 | <title>Fossil Performance</title> <h1 align="center">Performance Statistics</h1> The questions will inevitably arise: | | 1 2 3 4 5 6 7 8 9 10 11 | <title>Fossil Performance</title> <h1 align="center">Performance Statistics</h1> The questions will inevitably arise: How does Fossil perform? Does it use a lot of disk space or bandwidth? Is it scalable? In an attempt to answer these questions, this report looks at several projects that use fossil for configuration management and examines how well they are working. The following table is a summary of the results. (Last updated on 2015-02-28.) Explanation and analysis follow the table.
︙ | ︙ | |||
94 95 96 97 98 99 100 | In Fossil, every version of every file, every wiki page, every change to every ticket, and every check-in is a separate "artifact". One way to think of a Fossil project is as a bag of artifacts. Of course, there is a lot more than this going on in Fossil. Many of the artifacts have meaning and are related to other artifacts. But at a low level (for example when synchronizing two instances of the same project) the only thing that matters | | 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 | In Fossil, every version of every file, every wiki page, every change to every ticket, and every check-in is a separate "artifact". One way to think of a Fossil project is as a bag of artifacts. Of course, there is a lot more than this going on in Fossil. Many of the artifacts have meaning and are related to other artifacts. But at a low level (for example when synchronizing two instances of the same project) the only thing that matters is the unordered collection of artifacts. In fact, one of the key characteristics of Fossil is that the entire project history can be reconstructed simply by scanning the artifacts in an arbitrary order. The number of check-ins is the number of times that the "commit" command has been run. A single check-in might change 3 or 4 files, or it might change dozens or hundreds of files. Regardless of the number of files changed, it still only counts as one check-in. The "Uncompressed Size" is the total size of all the artifacts within the repository assuming they were all uncompressed and stored separately on the disk. Fossil makes use of delta compression between related versions of the same file, and then uses zlib compression on the resulting deltas. The total resulting repository size is shown after the uncompressed size. On the right end of the table, we show the "Clone Bandwidth". This is the total number of bytes sent from server back to the client. The number of
︙ | ︙ |
Changes to www/sync.wiki.
1 2 | <title>The Fossil Sync Protocol</title> | < < < | < | | | | | | > | | | > > | | | > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 | <title>The Fossil Sync Protocol</title> <p>This document describes the wire protocol used to synchronize content between two Fossil repositories.</p> <h2>1.0 Overview</h2> <p>The global state of a fossil repository consists of an unordered collection of artifacts. Each artifact is identified by its SHA1 hash expressed as a 40-character lower-case hexadecimal string. Synchronization is the process of sharing artifacts between servers so that all servers have copies of all artifacts. Because artifacts are unordered, the order in which artifacts are received at a server is inconsequential. It is assumed that the SHA1 hashes of artifacts are unique - that every artifact has a different SHA1 hash. To a first approximation, synchronization proceeds by sharing lists of SHA1 hashes of available artifacts, then sharing those artifacts that are not found on one side or the other of the connection. In practice, a repository might contain millions of artifacts. The list of SHA1 hashes for this many artifacts can be large. So optimizations are employed that usually reduce the number of SHA1 hashes that need to be shared to a few hundred.</p> <p>Each repository also has local state. The local state determines the web-page formatting preferences, authorized users, ticket formats, and similar information that varies from one repository to another. The local state is not transferred during a sync. However, some local state is transferred during a [/help?cmd=clone|clone] in order to initialize the local state of the new repository. And the [/help?cmd=configuration|config push] and [/help?cmd=configuration|config pull] commands can be used by an administrator to sync local state.</p> <h2>2.0 Transport</h2> <p>All communication between client and server is via HTTP requests. The server is listening for incoming HTTP requests. The client issues one or more HTTP requests and receives replies for each request.</p> <p>The server might be running as an independent server using the <b>server</b> command, or it might be launched from inetd or xinetd using the <b>http</b> command. Or the server might be launched from CGI. (See "[./server.wiki|How To Configure A Fossil Server]" for details.) The specifics of how the server listens for incoming HTTP requests are immaterial to this protocol. The important point is that the server is listening for requests and the client is the issuer of the requests.</p> <p>A single push, pull, or sync might involve multiple HTTP requests. The client maintains state between all requests. But on the server side, each request is independent. The server does not preserve any information about the client from one request to the next.</p> <h4>2.0.1 Encrypted Transport</h4> <p>In the current implementation of Fossil, the server only understands HTTP requests. The client can send either clear-text HTTP requests or encrypted HTTPS requests. But when HTTPS requests are sent, they first must be decrypted by a webserver or proxy before being passed to the Fossil server. This limitation may be relaxed in a future release.</p> <h3>2.1 Server Identification</h3> <p>The server is identified by a URL argument that accompanies the push, pull, or sync command on the client.
(As a convenience to users, the URL can be omitted on the client command and the same URL from the most recent push, pull, or sync will be reused. This saves typing in the common case where the client does multiple syncs to |
︙ | ︙ | |||
148 149 150 151 152 153 154 | from the server. The nonce is the SHA1 hash of the remainder of the message - all text that follows the newline character that terminates the login card. The signature is the SHA1 hash of the concatenation of the nonce and the users password.</p> <p>For each login card, the server looks up the user and verifies that the nonce matches the SHA1 hash of the remainder of the | | | | > | > | > | | | 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 | from the server. The nonce is the SHA1 hash of the remainder of the message - all text that follows the newline character that terminates the login card. The signature is the SHA1 hash of the concatenation of the nonce and the users password.</p> <p>For each login card, the server looks up the user and verifies that the nonce matches the SHA1 hash of the remainder of the message. It then checks the signature hash to make sure the signature matches. If everything checks out, then the client is granted all privileges of the specified user.</p> <p>Privileges are cumulative. There can be multiple successful login cards. The session privileges are the bit-wise OR of the privileges of each individual login.</p> <h3>3.3 File Cards</h3> <p>Artifacts are transferred using either "file" cards, or "cfile" or "uvfile" cards. The name "file" card comes from the fact that most artifacts correspond to files that are under version control. The "cfile" name is an abbreviation for "compressed file". The "uvfile" name is an abbreviation for "unversioned file". </p> <h4>3.3.1 Ordinary File Cards</h4> <p>For sync protocols, artifacts are transferred using "file" cards. File cards come in two different formats depending on whether the artifact is sent directly or as a delta from some other artifact.</p> <blockquote> <b>file</b> <i>artifact-id size</i> <b>\n</b> <i>content</i><br> <b>file</b> <i>artifact-id delta-artifact-id size</i> <b>\n</b> <i>content</i> </blockquote> <p>File cards are different from most other cards in that they are followed by in-line "payload" data. The content of the artifact or the artifact delta consists of the first <i>size</i> bytes of the x-fossil content that immediately follow the newline that terminates the file card. Only file and cfile cards have this characteristic. </p> <p>The first argument of a file card is the ID of the artifact that |
︙ | ︙ | |||
207 208 209 210 211 212 213 | <p>A client that sends a clone protocol version "3" or greater will receive artifacts as "cfile" cards while cloning. This card was introduced to improve the speed of the transfer of content by sending the compressed artifact directly from the server database to the client.</p> <p>Compressed File cards are similar to File cards, sharing the same in-line "payload" data characteristics and also the same treatment of | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 | <p>A client that sends a clone protocol version "3" or greater will receive artifacts as "cfile" cards while cloning. This card was introduced to improve the speed of the transfer of content by sending the compressed artifact directly from the server database to the client.</p> <p>Compressed File cards are similar to File cards, sharing the same in-line "payload" data characteristics and also the same treatment of direct content or delta content. Cfile cards come in two different formats depending on whether the artifact is sent directly or as a delta from some other artifact.</p> <blockquote> <b>cfile</b> <i>artifact-id usize csize</i> <b>\n</b> <i>content</i><br> <b>cfile</b> <i>artifact-id delta-artifact-id usize csize</i> <b>\n</b> <i>content</i><br> </blockquote> <p>The first argument of the cfile card is the ID of the artifact that is being transferred. The artifact ID is the lower-case hexadecimal representation of the SHA1 hash of the artifact. The second argument of the cfile card is the original size in bytes of the artifact. The last argument of the cfile card is the number of compressed bytes of payload that immediately follow the cfile card. If the cfile card has only three arguments, that means the payload is the complete content of the artifact. If the cfile card has four arguments, then the payload is a delta and the second argument is the ID of another artifact that is the source of the delta and the third argument is the original size of the delta artifact.</p> <p>Unlike file cards, cfile cards are only sent in one direction during a clone from server to client for clone protocol version "3" or greater.</p> <h4>3.3.3 Private artifacts</h4> <p>"Private" content consist of artifacts that are not normally synced. However, private content will be synced when the the [/help?cmd=sync|fossil sync] command includes the "--private" option. </p> <p>Private content is marked by a "private" card: <blockquote> <b>private</b> </blockquote> <p>The private card has no arguments and must directly precede a file card that contains the private content.</p> <h4>3.3.4 Unversioned File Cards</h4> <p>Unversioned content is sent in both directions (client to server and server to client) using "uvfile" cards in the following format: <blockquote> <b>uvfile</b> <i>name mtime hash size flags</i> <b>\n</b> <i>content</i> </blockquote> <p>The <i>name</i> field is the name of the unversioned file. The <i>mtime</i> is the last modification time of the file in seconds since 1970. The <i>hash</i> field is the SHA1 hash of the content for the unversioned file, or "<b>-</b>" for deleted content. 
The <i>size</i> field is the (uncompressed) size of the content in bytes. The <i>flags</i> field is an integer which is interpreted as an array of bits. The 0x0004 bit of <i>flags</i> indicates that the <i>content</i> is to be omitted. The content might be omitted if it is too large to transmit, or if the sender merely wants to update the modification time of the file without changing the file's content. The <i>content</i> is the (uncompressed) content of the file. <p>The receiver should only accept the uvfile card if the hash and size match the content and if the mtime is newer than any existing instance of the same file held by the receiver. The sender will not normally transmit a uvfile card unless all these constraints are true, but the receiver should double-check. <p>A server should only accept uvfile cards if the login user has the "y" write-unversioned permission. <p>Servers send uvfile cards in response to uvgimme cards received from the client. Clients send uvfile cards when they determine that the server needs the content based on uvigot cards previously received from the server. <h3>3.4 Push and Pull Cards</h3> <p>Among the first cards in a client-to-server message are the push and pull cards. The push card tells the server that the client is pushing content. The pull card tells the server that the client wants to pull content. In the event of a sync, both cards are sent. The format is as follows:</p>
︙ | ︙ | |||
253 254 255 256 257 258 259 260 261 262 263 264 265 266 | of the software project that the client repository contains. The projectcode for the client and server must match in order for the transaction to proceed.</p> <p>The server will also send a push card back to the client during a clone. This is how the client determines what project code to put in the new repository it is constructing.</p> <h3>3.5 Clone Cards</h3> <p>A clone card works like a pull card in that it is sent from client to server in order to tell the server that the client wants to pull content. The clone card comes in two formats. Older clients use the no-argument format and newer clients use the | > > | 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 | of the software project that the client repository contains. The projectcode for the client and server must match in order for the transaction to proceed.</p> <p>The server will also send a push card back to the client during a clone. This is how the client determines what project code to put in the new repository it is constructing.</p> <p>The <i>servercode</i> argument is currently unused. <h3>3.5 Clone Cards</h3> <p>A clone card works like a pull card in that it is sent from client to server in order to tell the server that the client wants to pull content. The clone card comes in two formats. Older clients use the no-argument format and newer clients use the |
︙ | ︙ | |||
318 319 320 321 322 323 324 | <h3>3.6 Igot Cards</h3> <p>An igot card can be sent from either client to server or from server to client in order to indicate that the sender holds a copy of a particular artifact. The format is:</p> <blockquote> | | | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 | <h3>3.6 Igot Cards</h3> <p>An igot card can be sent from either client to server or from server to client in order to indicate that the sender holds a copy of a particular artifact. The format is:</p> <blockquote> <b>igot</b> <i>artifact-id</i> ?<i>flag</i>? </blockquote> <p>The first argument of the igot card is the ID of the artifact that the sender possesses. The receiver of an igot card will typically check to see if it also holds the same artifact and if not it will request the artifact using a gimme card in either the reply or in the next message.</p> <p>If the second argument exists and is "1", then the artifact identified by the first argument is private on the sender and should be ignored unless a "--private" [/help?cmd=sync|sync] is occurring. <h4>3.6.1 Unversioned Igot Cards</h4> <p>Zero or more "uvigot" cards are sent from server to client when synchronizing unversioned content. The format of a uvigot card is as follows: <blockquote> <b>uvigot</b> <i>name mtime hash size</i> </blockquote> <p>The <i>name</i> argument is the name of an unversioned file. The <i>mtime</i> is the last modification time of the unversioned file in seconds since 1970. The <i>hash</i> is the SHA1 hash of the unversioned file content, or "<b>-</b>" if the file has been deleted. The <i>size</i> is the uncompressed size of the file in bytes. <p>When the server sees a "pragma uv-hash" card for which the hash does not match, it sends uvigot cards for every unversioned file that it holds. The client will use this information to figure out which unversioned files need to be synchronized. The server might also send a uvigot card when it receives a uvgimme card but its reply message size is already oversized and hence unable to hold the usual uvfile reply. <p>When a client receives a "uvigot" card, it checks to see if the file needs to be transfered from client to server or from server to client. If a client-to-server transmission is needed, the client schedules that transfer to occur on a subsequent HTTP request. If a server-to-client transfer is needed, then the client sends a "uvgimme" card back to the server to request the file content. <h3>3.7 Gimme Cards</h3> <p>A gimme card is sent from either client to server or from server to client. The gimme card asks the receiver to send a particular artifact back to the sender. The format of a gimme card is this:</p> <blockquote> <b>gimme</b> <i>artifact-id</i> </blockquote> <p>The argument to the gimme card is the ID of the artifact that the sender wants. The receiver will typically respond to a gimme card by sending a file card in its reply or in the next message.</p> <h4>3.7.1 Unversioned Gimme Cards</h4> <p>Sync synchronizing unversioned content, the client may send "uvgimme" cards to the server. 
A uvgimme card requests that the server send unversioned content to the client. The format of a uvgimme card is as follows: <blockquote> <b>uvgimme</b> <i>name</i> </blockquote> <p>The <i>name</i> is the name of the unversioned file found on the server that the client would like to have. When a server sees a uvgimme card, it normally responds with a uvfile card, though it might also send another uvigot card if the HTTP reply is already oversized. <h3>3.8 Cookie Cards</h3> <p>A cookie card can be used by a server to record a small amount of state information on a client. The server sends a cookie to the client. The client sends the same cookie back to the server on its next request. The cookie card has a single argument which is its payload.</p>
︙ | ︙ | |||
383 384 385 386 387 388 389 | <blockquote> <b>reqconfig</b> <i>configuration-name</i> </blockquote> <p>As of [/timeline?r=trunk&c=2015-03-19+03%3A57%3A46&n=20|2015-03-19], the configuration-name must be one of the following values: | | | 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 | <blockquote> <b>reqconfig</b> <i>configuration-name</i> </blockquote> <p>As of [/timeline?r=trunk&c=2015-03-19+03%3A57%3A46&n=20|2015-03-19], the configuration-name must be one of the following values: <table border=0 align="center"> <tr><td valign="top"> <ul> <li> css <li> header <li> footer <li> logo-mimetype <li> logo-image |
︙ | ︙ | |||
435 436 437 438 439 440 441 | <li> ticket-title-expr <li> ticket-closed-expr <li> @reportfmt <li> @user <li> @concealed <li> @shun </ul></td></tr> | | | | 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 | <li> ticket-title-expr <li> ticket-closed-expr <li> @reportfmt <li> @user <li> @concealed <li> @shun </ul></td></tr> </table> <p>New configuration-names are likely to be added in future releases of Fossil. If the server receives a configuration-name that it does not understand, the entire reqconfig card is silently ignored. The reqconfig card might also be ignored if the user lacks sufficient privilege to access the requested information. <p>The configuration-names that begin with an alphabetic character refer to values in the "config" table of the server database. For example, the "logo-image" configuration item refers to the project logo image that is configured on the Admin page of the [./webui.wiki | web-interface]. The value of the configuration item is returned to the client using a "config" card. <p>If the configuration-name begins with "@", that refers to a class of values instead of a single value. The content of these configuration items is returned in a "config" card that contains pure SQL text that is intended to be evaluated by the client. <p>The @user and @concealed configuration items contain sensitive information and are ignored for clients without sufficient privilege. <h3>3.10 Configuration Cards</h3> |
︙ | ︙ | |||
476 477 478 479 480 481 482 | <p>The server will only accept a config card if the user has "Admin" privilege. A client will only accept a config card if it had sent a corresponding reqconfig card in its request. <p>The content of the configuration item is used to overwrite the corresponding configuration data in the receiver. | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | < < < < < | | 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 | <p>The server will only accept a config card if the user has "Admin" privilege. A client will only accept a config card if it had sent a corresponding reqconfig card in its request. <p>The content of the configuration item is used to overwrite the corresponding configuration data in the receiver. <h3>3.11 Pragma Cards</h3> <p>The client may try to influence the behavior of the server by issuing a pragma card: <blockquote> <b>pragma</i> <i>name value...</i> </blockquote> <p>The "pragma" card has at least one argument which is the pragma name. The pragma name defines what the pragma does. A pragma might have zero or more "value" arguments depending on the pragma name. <p>New pragma names may be added to the protocol from time to time in order to enhance the capabilities of Fossil. Unknown pragmas are silently ignored, for backwards compatibility. <p>The following are the known pragma names as of 2016-08-03: <ol> <li><p><b>send-private</b> <p>The send-private pragma instructs the server to send all of its private artifacts to the client. The server will only obey this request if the user has the "x" or "Private" privilege. <li><p><b>send-catalog</b> <p>The send-catalog pragma instructs the server to transmit igot cards for every known artifact. This can help the client and server to get back in synchronization after a prior protocol error. The "--verily" option to the [/help?cmd=sync|fossil sync] command causes the send-catalog pragma to be transmitted.</p> <li><p><b>uv-hash</b> <i>HASH</i> <p>The uv-hash pragma is sent from client to server to provoke a synchronization of unversioned content. The <i>HASH</i> is a SHA1 hash of the names, modification times, and individual hashes of all unversioned files on the client. If the unversioned content hash from the client does not match the unversioned content hash on the server, then the server will reply with either a "pragma uv-push-ok" or "pragma uv-pull-only" card followed by one "uvigot" card for each unversioned file currently held on the server. The collection of "uvigot" cards sent in response to a "uv-hash" pragma is called the "unversioned catalog". The client will used the unversioned catalog to figure out which files (if any) need to be synchronized between client and server and send appropriate "uvfile" or "uvgimme" cards on the next HTTP request.</p> <p>If a client sends a uv-hash pragma and does not receive back either a uv-pull-only or uv-push-ok pragma, that means that the content on the server exactly matches the content on the client and no further synchronization is required. 
<li><p><b>uv-pull-only</b> <p>A server sends the uv-pull-only pragma to the client in response to a uv-hash pragma with a mismatched content hash argument. This pragma indicates that there are differences in unversioned content between the client and server but that content can only be transferred from server to client. The server is unwilling to accept content from the client because the client login lacks the "write-unversioned" permission.</p> <li><p><b>uv-push-ok</b> <p>A server sends the uv-push-ok pragma to the client in response to a uv-hash pragma with a mismatched content hash argument. This pragma indicates that there are differences in unversioned content between the client and server and that content can be transferred in either direction. The server is willing to accept content from the client because the client login has the "write-unversioned" permission.</p> </ol> <h3>3.12 Comment Cards</h3> <p>Any card that begins with "#" (ASCII 0x23) is a comment card and is silently ignored.</p> <h3>3.13 Error Cards</h3> <p>If the server discovers anything wrong with a request, it generates an error card in its reply. When the client sees the error card, it displays an error message to the user and aborts the sync operation. An error card looks like this:</p> <blockquote> <b>error</b> <i>error-message</i> </blockquote> <p>The error message is English text that is encoded in order to be a single token. A space (ASCII 0x20) is represented as "\s" (ASCII 0x5C, 0x73). A newline (ASCII 0x0a) is represented as "\n" (ASCII 0x5C, 0x6E). A backslash (ASCII 0x5C) is represented as two backslashes "\\". Apart from space and newline, no other whitespace characters nor any unprintable characters are allowed in the error message.</p> <h3>3.14 Unknown Cards</h3> <p>If either the client or the server sees a card that is not described above, then it generates an error and aborts.</p> <h2>4.0 Phantoms And Clusters</h2> <p>When a repository knows that an artifact exists and knows the ID of
︙ | ︙ | |||
648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 | <p>A sync is just a pull and a push that happen at the same time. The first three steps of a pull are combined with the first five steps of a push. Steps (4) through (7) of a pull are combined with steps (5) through (8) of a push. And steps (8) through (10) of a pull are combined with step (9) of a push.</p> <h2>6.0 Summary</h2> <p>Here are the key points of the synchronization protocol:</p> <ol> <li>The client sends one or more PUSH HTTP requests to the server. The request and reply content type is "application/x-fossil". <li>HTTP request content is compressed using zlib. <li>The content of request and reply consists of cards with one | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | > > > > > | > > | > | 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 | <p>A sync is just a pull and a push that happen at the same time. The first three steps of a pull are combined with the first five steps of a push. Steps (4) through (7) of a pull are combined with steps (5) through (8) of a push. And steps (8) through (10) of a pull are combined with step (9) of a push.</p> <h3>5.4 Unversioned File Sync</h3> <p>"Unversioned files" are files held in the repository where only the most recent version of the file is kept rather than the entire change history. Unversioned files are intended to be used to store ephemeral content, such as compiled binaries of the most recent release. <p>Unversioned files are identified by name and timestamp (mtime). Only the most recent version of each file (the version with the largest mtime value) is retained. <p>Unversioned files are synchronized using the [/help?cmd=unversioned|fossil unversioned sync] command. <p>A schematic of an unversioned file synchronization is as follows: <ol> <li>The client sends a "pragma uv-hash" card to the server. The argument to the uv-hash pragma is a hash of all filesnames, mtimes, and content hashes for the unversioned files held by the client. <hr> <li>If the unversioned content hash from the client matches the unversioned content hash on the server, then nothing needs to be done and the server no-ops. But if the hashes are different, then the server replies with either a uv-pull-only or a uv-push-ok pragma followed by uvigot cards for all unversioned files held on the server. <hr> <li>The client examines the uvigot cards received from the server and determines which unversioned files need to be exchanged in order to bring the client and server into synchronization. The client then sends appropriate "uvgimme" or "uvfile" cards back to the server. <hr> <li>The server updates its unversioned file store with received "uvfile" cards and answers "uvgimme" cards with "uvfile" cards in its reply. </ol> <p>The last two steps might be repeated multiple times if there is more unversioned content to be transferred than will fit comfortably in a single HTTP request. <h2>6.0 Summary</h2> <p>Here are the key points of the synchronization protocol:</p> <ol> <li>The client sends one or more PUSH HTTP requests to the server. The request and reply content type is "application/x-fossil". <li>HTTP request content is compressed using zlib. 
<li>The content of request and reply consists of cards with one card per line. <li>Card formats are: <ul> <li> <b>login</b> <i>userid nonce signature</i> <li> <b>push</b> <i>servercode projectcode</i> <li> <b>pull</b> <i>servercode projectcode</i> <li> <b>clone</b> <li> <b>clone_seqno</b> <i>sequence-number</i> <li> <b>file</b> <i>artifact-id size</i> <b>\n</b> <i>content</i> <li> <b>file</b> <i>artifact-id delta-artifact-id size</i> <b>\n</b> <i>content</i> <li> <b>cfile</b> <i>artifact-id size</i> <b>\n</b> <i>content</i> <li> <b>cfile</b> <i>artifact-id delta-artifact-id size</i> <b>\n</b> <i>content</i> <li> <b>uvfile</b> <i>name mtime hash size flags</i> <b>\n</b> <i>content</i> <li> <b>private</b> <li> <b>igot</b> <i>artifact-id</i> ?<i>flag</i>? <li> <b>uvigot</b> <i>name mtime hash size</i> <li> <b>gimme</b> <i>artifact-id</i> <li> <b>uvgimme</b> <i>name</i> <li> <b>cookie</b> <i>cookie-text</i> <li> <b>reqconfig</b> <i>parameter-name</i> <li> <b>config</b> <i>parameter-name size</i> <b>\n</b> <i>content</i> <li> <b>pragma</b> <i>name</i> <i>value...</i> <li> <b>error</b> <i>error-message</i> <li> <b>#</b> <i>arbitrary-text...</i> </ul> <li>Phantoms are artifacts that a repository knows exist but does not possess. <li>Clusters are artifacts that contain IDs of other artifacts. <li>Clusters are created automatically on the server during a pull. <li>Repositories keep track of all artifacts that are not named in any cluster and send igot messages for those artifacts. <li>Repositories keep track of all the phantoms they hold and send gimme messages for those artifacts. </ol> |
Changes to www/tech_overview.wiki.
1 2 3 4 5 6 7 8 9 | <title>Technical Overview</title> <h2 align="center"> A Technical Overview<br>Of The Design And Implementation<br>Of Fossil </h2> <h2>1.0 Introduction</h2> At its lowest level, a Fossil repository consists of an unordered set of immutable "artifacts". You might think of these artifacts as "files", | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 | <title>Technical Overview</title> <h2 align="center"> A Technical Overview<br>Of The Design And Implementation<br>Of Fossil </h2> <h2>1.0 Introduction</h2> At its lowest level, a Fossil repository consists of an unordered set of immutable "artifacts". You might think of these artifacts as "files", since in many cases the artifacts are exactly that. But other "control artifacts" are also included in the mix. These control artifacts define the relationships between artifacts - which files go together to form a particular version of the project, who checked in that version and when, what was the check-in comment, what wiki pages are included with the project, what are the edit histories of each wiki page, what bug reports or tickets are included, who contributed to the evolution of each ticket, and so forth. This low-level file format is called the "global state" of the repository, since this is the information that is synced to peer repositories using push and pull operations. The low-level file format is also called "enduring" since it is intended to last for many years. The details of the low-level, enduring, global file format are [./fileformat.wiki | described separately]. This article is about how Fossil is currently implemented. Instead of dealing with vague abstractions of "enduring file formats" as the [./fileformat.wiki | other document] does, this article provides some detail on how Fossil actually stores information on disk. <h2>2.0 Three Databases</h2> Fossil stores state information in [http://www.sqlite.org/ | SQLite] database files. SQLite keeps an entire relational database, including multiple tables and indices, in a single disk file. The SQLite library allows the database files to be efficiently queried and updated using the industry-standard SQL language. SQLite updates are atomic, so even in the event of a system crash or power failure the repository content is protected. Fossil uses three separate classes of SQLite databases: <ol> <li>The configuration database <li>Repository databases <li>Checkout databases </ol> The configuration database is a one-per-user database that holds global configuration information used by Fossil. There is one repository database per project. The repository database is the file that people are normally referring to when they say "a Fossil repository". The checkout database is found in the working checkout for a project and contains state information that is unique to that working checkout. Fossil does not always use all three database files. The web interface, for example, typically only uses the repository database. And the [/help/all | fossil setting] command only opens the configuration database when the --global option is used. But other commands use all three databases at once. For example, the [/help/status | fossil status] command will first locate the checkout database, then use the checkout database to find the repository database, then open the configuration database.
Whenever multiple databases are used at the same time, they are all opened on the same SQLite database connection using SQLite's [http://www.sqlite.org/lang_attach.html | ATTACH] command. The chart below provides a quick summary of how each of these database files are used by Fossil, with detailed discussion following. <table border="1" width="80%" cellpadding="0" align="center"> <tr> <td width="33%" valign="top"> <h3 align="center">Configuration Database<br>"~/.fossil"</h3> <ul> <li>Global [/help/setting |settings] <li>List of active repositories used by the [/help/all | all] command </ul> |
︙ | ︙ | |||
101 102 103 104 105 106 107 | local edits <li>The "[/help/stash | stash]" <li>Information needed to "[/help/undo|undo]" or "[/help/redo|redo]" </ul> </td> </tr> </table> | < | 101 102 103 104 105 106 107 108 109 110 111 112 113 114 | local edits <li>The "[/help/stash | stash]" <li>Information needed to "[/help/undo|undo]" or "[/help/redo|redo]" </ul> </td> </tr> </table> <h3>2.1 The Configuration Database</h3> The configuration database holds cross-repository preferences and a list of all repositories for a single user. The [/help/setting | fossil setting] command can be used to specify various |
︙ | ︙ | |||
133 134 135 136 137 138 139 | LOCALAPPDATA, APPDATA, or HOMEPATH environment variables, in that order. You can override this default location by defining the environment variable FOSSIL_HOME pointing to an appropriate (writable) directory. <h3>2.2 Repository Databases</h3> | | | | | | 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 | LOCALAPPDATA, APPDATA, or HOMEPATH environment variables, in that order. You can override this default location by defining the environment variable FOSSIL_HOME pointing to an appropriate (writable) directory. <h3>2.2 Repository Databases</h3> The repository database is the file that is commonly referred to as "the repository". This is because the repository database contains, among other things, the complete revision, ticket, and wiki history for a project. It is customary to name the repository database after the name of the project, with a ".fossil" suffix. For example, the repository database for the self-hosting Fossil repository is called "fossil.fossil" and the repository database for SQLite is called "sqlite.fossil". <h4>2.2.1 Global Project State</h4> The bulk of the repository database (typically 75 to 85%) consists of the artifacts that comprise the [./fileformat.wiki | enduring, global, shared state] of the project. The artifacts are stored as BLOBs, compressed using [http://www.zlib.net/ | zlib compression] and, where applicable, using [./delta_encoder_algorithm.wiki | delta compression]. The combination of zlib and delta compression results in a considerable space savings. For the SQLite project, at the time of this writing, the total size of all artifacts is over 2.0 GB but thanks to the combined zlib and delta compression, that content only takes up 32 MB of space in the repository database, for a compression ratio of about 64:1. The average size of a content BLOB in the database is around 500 bytes. Note that the zlib and delta compression is not an inherent part of the Fossil file format; it is just an optimization. The enduring file format for Fossil is the unordered set of artifacts. The compression techniques are just a detail of how the current implementation of Fossil happens to store these artifacts efficiently on disk. All of the original uncompressed and undeltaed artifacts can be extracted from a Fossil repository database using the [/help/deconstruct | fossil deconstruct] command. Individual artifacts can be extracted using the [/help/artifact | fossil artifact] command. When accessing the repository database using raw SQL and the [/help/sqlite3 | fossil sql] command, the extension function "<tt>content()</tt>" with a single argument which is the SHA1 hash of an artifact will return the complete undeltaed and uncompressed content of that artifact. Going the other way, the [/help/reconstruct | fossil reconstruct] command will scan a directory hierarchy and add all files found to a new repository database. The [/help/import | fossil import] command works by reading the input git-fast-export stream and using it to construct corresponding artifacts which are then written into the repository database.
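As a small, concrete illustration of the <tt>content()</tt> function described above, a query along the following lines could be typed into the [/help/sqlite3 | fossil sql] shell. The 40-character hash is only a placeholder; substitute the SHA1 hash of a real artifact from your own repository.
<blockquote><pre>
-- Recover the complete undeltaed, uncompressed text of one artifact.
-- The argument must be the SHA1 hash of an artifact in this repository.
SELECT content('0123456789abcdef0123456789abcdef01234567');
</pre></blockquote>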
︙ | ︙ | |||
223 224 225 226 227 228 229 | this information (and the user credentials and privileges too) is local to each repository database; it is not shared between repositories by [/help/sync | fossil sync]. That is because it is entirely reasonable that two different websites for the same project might have completely different display preferences and user communities. One instance of the project might be a fork of the other, for example, which pulls from the other but never pushes and extends the project in ways that the keepers of | | | | | | | 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 | this information (and the user credentials and privileges too) is local to each repository database; it is not shared between repositories by [/help/sync | fossil sync]. That is because it is entirely reasonable that two different websites for the same project might have completely different display preferences and user communities. One instance of the project might be a fork of the other, for example, which pulls from the other but never pushes and extends the project in ways that the keepers of the other website disapprove of. Display and processing information includes the following: * The name and description of the project * The CSS file, header, and footer used by all web pages * The project logo image * Fields of tickets that are considered "significant" and which are therefore collected from artifacts and made available for display * Templates for screens to view, edit, and create tickets * Ticket report formats and display preferences * Local values for [/help/setting | settings] that override the global values defined in the per-user configuration database. Though the display and processing preferences do not move between repository instances using [/help/sync | fossil sync], this information can be shared between repositories using the [/help/config | fossil config push] and [/help/config | fossil config pull] commands. The display and processing information is also copied into new repositories when they are created using [/help/clone | fossil clone]. <h4>2.2.4 User Credentials And Privileges</h4> Just because two development teams are collaborating on a project and allow push and/or pull between their repositories does not mean that they trust each other enough to share passwords and access privileges. Hence the names and emails and passwords and privileges of users are considered private information that is kept locally in each repository. Each repository database has a table holding the username, privileges, and login credentials for users authorized to interact with that particular database. In addition, there is a table named "concealed" that maps the SHA1 hash of each user's email address back into their true email address. The concealed table allows just the SHA1 hash of email addresses to be stored in tickets, and thus prevents actual email addresses from falling into the hands of spammers who happen to clone the repository. The content of the user and concealed tables can be pushed and pulled using the [/help/config | fossil config push] and [/help/config | fossil config pull] commands with the "user" and "email" as the AREA argument, but only if you have administrative privileges on the remote repository.
<h4>2.2.5 Shunned Artifact List</h4> The set of canonical artifacts for a project - the global state for the project - is intended to be an append-only database. In other words, new artifacts can be added but artifacts can never be removed. But it sometimes happens that inappropriate content is mistakenly or maliciously added to a repository. The only way to get rid of the undesired content is to [./shunning.wiki | "shun"] it. The "shun" table in the repository database records the SHA1 hash of all shunned artifacts. The shun table can be pushed or pulled using the [/help/config | fossil config] command with the "shun" AREA argument. The shun table is also copied during a [/help/clone | clone]. <h3>2.3 Checkout Databases</h3> |
︙ | ︙ |
Changes to www/th1.md.
︙ | ︙ | |||
162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 | * regexp * reinitialize * render * repository * searchable * setParameter * setting * styleHeader * styleFooter * tclEval * tclExpr * tclInvoke * tclIsSafe * tclMakeSafe * tclReady * trace | > > | | 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 | * regexp * reinitialize * render * repository * searchable * setParameter * setting * stime * styleHeader * styleFooter * tclEval * tclExpr * tclInvoke * tclIsSafe * tclMakeSafe * tclReady * trace * unversioned content * unversioned list * utime * verifyCsrf * wiki Each of the commands above is documented by a block comment above their implementation in the th\_main.c or th\_tcl.c source files. |
︙ | ︙ | |||
247 248 249 250 251 252 253 | * decorate STRING Renders STRING as wiki content; however, only links are handled. No other markup is processed. <a name="dir"></a>TH1 dir Command | | | 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 | * decorate STRING Renders STRING as wiki content; however, only links are handled. No other markup is processed. <a name="dir"></a>TH1 dir Command --------------------------------- * dir CHECKIN ?GLOB? ?DETAILS? Returns a list containing all files in CHECKIN. If GLOB is given only the files matching the pattern GLOB within CHECKIN will be returned. If DETAILS is non-zero, the result will be a list-of-lists, with each element containing at least three elements: the file name, the file |
︙ | ︙ | |||
398 399 400 401 402 403 404 | * linecount STRING MAX MIN Returns one more than the number of \n characters in STRING. But never returns less than MIN or more than MAX. <a name="markdown"></a>TH1 markdown Command | | | | 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 | * linecount STRING MAX MIN Returns one more than the number of \n characters in STRING. But never returns less than MIN or more than MAX. <a name="markdown"></a>TH1 markdown Command ------------------------------------------- * markdown STRING Renders the input string as markdown. The result is a two-element list. The first element contains the body, rendered as HTML. The second element is the text-only title string. <a name="puts"></a>TH1 puts Command ----------------------------------- * puts STRING Outputs the STRING unchanged. <a name="query"></a>TH1 query Command ------------------------------------- * query ?-nocomplain? SQL CODE Runs the SQL query given by the SQL argument. For each row in the result set, run CODE. In SQL, parameters such as $var are filled in using the value of variable "var". Result values are stored in variables with the column name prior to each invocation of CODE. |
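As an illustration of the kind of SQL that might be handed to this command, the hypothetical statement below assumes the standard repository schema, in which the "user" table has "login" and "cap" columns; for each result row the values would become available to CODE as the TH1 variables $login and $cap.

    -- Hypothetical report query for use with the TH1 "query" command.
    -- Assumes the repository's standard "user" table with "login" and
    -- "cap" (capability string) columns.
    SELECT login, cap FROM user ORDER BY login;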
︙ | ︙ | |||
515 516 517 518 519 520 521 522 523 524 525 526 527 528 | <a name="setting"></a>TH1 setting Command ----------------------------------------- * setting name Gets and returns the value of the specified setting. <a name="styleHeader"></a>TH1 styleHeader Command ------------------------------------------------- * styleHeader TITLE Render the configured style header. | > > > > > > > > | 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 | <a name="setting"></a>TH1 setting Command ----------------------------------------- * setting name Gets and returns the value of the specified setting. <a name="stime"></a>TH1 stime Command ------------------------------------- * stime Returns the number of microseconds of CPU time consumed by the current process in system space. <a name="styleHeader"></a>TH1 styleHeader Command ------------------------------------------------- * styleHeader TITLE Render the configured style header. |
︙ | ︙ | |||
575 576 577 578 579 580 581 | * tclIsSafe Returns non-zero if the Tcl interpreter is "safe". The Tcl interpreter will be created automatically if it has not been already. <a name="tclMakeSafe"></a>TH1 tclMakeSafe Command | | | 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 | * tclIsSafe Returns non-zero if the Tcl interpreter is "safe". The Tcl interpreter will be created automatically if it has not been already. <a name="tclMakeSafe"></a>TH1 tclMakeSafe Command ------------------------------------------------- **This command requires the Tcl integration feature.** * tclMakeSafe Forces the Tcl interpreter into "safe" mode by removing all "unsafe" commands and variables. This operation cannot be undone. The Tcl |
︙ | ︙ | |||
601 602 603 604 605 606 607 | <a name="trace"></a>TH1 trace Command ------------------------------------- * trace STRING Generates a TH1 trace message if TH1 tracing is enabled. | | | > | > > > > > | > | > > | 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 | <a name="trace"></a>TH1 trace Command ------------------------------------- * trace STRING Generates a TH1 trace message if TH1 tracing is enabled. <a name="unversioned_content"></a>TH1 unversioned content Command ----------------------------------------------------------------- * unversioned content FILENAME Attempts to locate the specified unversioned file and return its contents. An error is generated if the repository is not open or the unversioned file cannot be found. <a name="unversioned_list"></a>TH1 unversioned list Command ----------------------------------------------------------- * unversioned list Returns a list of the names of all unversioned files held in the local repository. An error is generated if the repository is not open. <a name="utime"></a>TH1 utime Command ------------------------------------- * utime Returns the number of microseconds of CPU time consumed by the current |
︙ | ︙ |
Changes to www/theory1.wiki.
︙ | ︙ | |||
14 15 16 17 18 19 20 | because Fossil is a distributed NoSQL database. And, Fossil does use a modern high-level language for its implementation, namely SQL. <h2>Fossil Is A NoSQL Database</h2> We begin with the first question: Fossil is not based on a distributed NoSQL database because Fossil <u><i>is</i></u> a distributed NoSQL database. | | | | 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 | because Fossil is a distributed NoSQL database. And, Fossil does use a modern high-level language for its implementation, namely SQL. <h2>Fossil Is A NoSQL Database</h2> We begin with the first question: Fossil is not based on a distributed NoSQL database because Fossil <u><i>is</i></u> a distributed NoSQL database. Fossil is <u>not</u> based on SQLite. The current implementation of Fossil uses SQLite as a local store for the content of the distributed database and as a cache for meta-information about the distributed database that is precomputed for quick and easy presentation. But the use of SQLite in this role is an implementation detail and is not fundamental to the design. Some future version of Fossil might do away with SQLite and substitute a pile-of-files or a key/value database in place of SQLite. (Actually, that is very unlikely to happen since SQLite works amazingly well in its current role, but the point is that omitting SQLite from Fossil is a theoretical possibility.) The underlying database that Fossil implements has nothing to do with SQLite, or SQL, or even relational database theory. The underlying database is very simple: it is an unordered collection of "artifacts". |
︙ | ︙ | |||
62 63 64 65 66 67 68 | So really, Fossil works with two separate databases. There is the bag-of-artifacts database which is non-relational and distributed (like a NoSQL database) and there is the local relational database. The bag-of-artifacts database has a fixed format and is what defines a Fossil repository. Fossil will never modify the file format of the bag-of-artifacts database in an incompatible way because to do so would be to make something that is no longer "Fossil". The local relational database, on the other hand, | | | 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 | So really, Fossil works with two separate databases. There is the bag-of-artifacts database which is non-relational and distributed (like a NoSQL database) and there is the local relational database. The bag-of-artifacts database has a fixed format and is what defines a Fossil repository. Fossil will never modify the file format of the bag-of-artifacts database in an incompatible way because to do so would be to make something that is no longer "Fossil". The local relational database, on the other hand, is a cache that contains information derived from the bag-of-artifacts. The schema of the local relational database changes from time to time as the Fossil implementation is enhanced, and the content is recomputed from the unchanging bag of artifacts. The local relational database is an implementation detail which currently happens to use SQLite. Another way to think of the relational tables in a Fossil repository is as an index for the artifacts. Without the relational tables, |
︙ | ︙ | |||
87 88 89 90 91 92 93 | And Fossil doesn't use a distributed NoSQL database because Fossil is a distributed NoSQL database. That answers the first question. <h2>SQL Is A High-Level Scripting Language</h2> The second concern states that Fossil does not use a high-level scripting | | | 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 | And Fossil doesn't use a distributed NoSQL database because Fossil is a distributed NoSQL database. That answers the first question. <h2>SQL Is A High-Level Scripting Language</h2> The second concern states that Fossil does not use a high-level scripting language. But that is not true. Fossil uses SQL (as implemented by SQLite) as its scripting language. This misunderstanding likely arises because people fail to appreciate that SQL is a programming language. People are taught that SQL is a "query language" as if that were somehow different from a "programming language". But they really are two different flavors of the same thing. I find that people do better with SQL if they think of |
︙ | ︙ | |||
123 124 125 126 127 128 129 | Much of the "heavy lifting" within the Fossil implementation is carried out using SQL statements. It is true that these SQL statements are glued together with C code, but it turns out that C works surprisingly well in that role. Several early prototypes of Fossil were written in a scripting language (TCL). We normally find that TCL programs are shorter than the equivalent C code by a factor of 10 or more. But in the case of Fossil, | | 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 | Much of the "heavy lifting" within the Fossil implementation is carried out using SQL statements. It is true that these SQL statements are glued together with C code, but it turns out that C works surprisingly well in that role. Several early prototypes of Fossil were written in a scripting language (TCL). We normally find that TCL programs are shorter than the equivalent C code by a factor of 10 or more. But in the case of Fossil, the use of TCL was actually making the code longer and more difficult to understand. And so in the final design, we switched from TCL to C in order to make the code easier to implement and debug. Without the advantages of having SQLite built in, the design might well have followed a different path. Most reports generated by Fossil involve a complex set of queries against the relational tables of the repository database. These queries are normally implemented in only a few dozen lines of SQL code. But if those queries had been implemented procedurally using a key/value or pile-of-files database, it may well have been the case that a high-level scripting language such as Tcl, Python, or Ruby would have worked out better than C.
Added www/unvers.wiki.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 | <title>Unversioned Content</title> <h1 align="center">Unversioned Content</h1> "Unversioned content" or "unversioned files" are files stored in a Fossil repository without history. Only the newest version of each unversioned file is retained. Though history is omitted, unversioned content is synced between repositories. In the event of a conflict during a sync, the most recent version of each unversioned file is retained and older versions are discarded. Unversioned files are useful for storing ephemeral content such as builds or frequently changing web pages. The [https://www.fossil-scm.org/fossil/uv/download.html|download] page of the self-hosting Fossil repository is stored as unversioned content, for example. <h2>Accessing Unversioned Files</h2> Unversioned files are <u>not</u> a part of a check-out. Unversioned files are intended to be accessible as web pages using URLs of the form: "http://domain/cgi-script/<b>uv</b>/<i>FILENAME</i>". In other words, the URI method "<b>uv</b>" (short for "unversioned") followed by the name of the unversioned file will retrieve the content of the file. The mimetype is inferred from the filename suffix. The content of unversioned files can also be retrieved using the [/help?cmd=unversioned|fossil unvers cat <i>FILENAME</i>] command. A list of all unversioned files on a server can be seen using the [/help?cmd=/uvlist|/uvlist] URL. ([/uvlist|example]). <h2>Syncing Unversioned Files</h2> Unversioned content is synced between repositories, though not by default. Special commands or command-line options are required. Unversioned content can be synced using the following commands: <blockquote><pre> fossil sync <b>-u</b> fossil clone <b>-u</b> <i>URL local-repo-name</i> fossil unversioned sync </pre></blockquote> The [/help?cmd=sync|fossil sync] and [/help?cmd=clone|fossil clone] commands will synchronize unversioned content if and only if the "-u" (or "--unversioned") command-line option is supplied. The [/help?cmd=unversioned|fossil unversioned sync] command will synchronize the unversioned content without synchronizing anything else. Notice that the "-u" option does not work on [/help?cmd=push|fossil push] or [/help?cmd=pull|fossil pull]. The "-u" option is only available on "sync" and "clone". A rough equivalent of an unversioned pull would be the [/help?cmd=unversioned|fossil unversioned revert] command. The "unversioned revert" command causes the unversioned content on the local repository to be overwritten by the unversioned content found on the remote repository. <h2>Implementation Details</h2> <i>(This section outlines the current implementation of unversioned files.
This is not an interface spec and hence subject to change.)</i> Unversioned content is stored in the repository in the "unversioned" table: <blockquote><pre> CREATE TABLE unversioned( uvid INTEGER PRIMARY KEY AUTOINCREMENT, -- unique ID for this file name TEXT UNIQUE, -- Name of the file rcvid INTEGER, -- From whence this file was received mtime DATETIME, -- Last change (seconds since 1970) hash TEXT, -- SHA1 hash of uncompressed content sz INTEGER, -- Size of uncompressed content encoding INT, -- 0: plaintext 1: zlib compressed content BLOB -- File content ); </pre></blockquote> If there are no unversioned files in the repository, then the "unversioned" table does not necessarily exist. A simple way to purge all unversioned content from a repository is to run: <blockquote><pre> fossil sql "DROP TABLE unversioned; VACUUM;" </pre></blockquote> No delta compression is performed on unversioned files, since there is no history to delta against. Unversioned content is exchanged between servers as whole, uncompressed files (though the content does get compressed when the overall HTTP message payload is compressed). SHA1 hash exchanges are used to avoid sending content over the wire unnecessarily. See the [./sync.wiki|synchronization protocol documentation] for further information. |
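Since the schema above lives in the repository database, unversioned content can also be inspected with ordinary SQL through the [/help/sqlite3 | fossil sql] shell. The query below is only an illustrative sketch built from the table definition shown above; it is not part of Fossil itself.
<blockquote><pre>
-- List each unversioned file with its last-change time, uncompressed
-- size, and storage encoding (0 = plaintext, 1 = zlib, per the schema).
SELECT name,
       datetime(mtime, 'unixepoch') AS last_change,
       sz AS uncompressed_size,
       encoding
  FROM unversioned
 ORDER BY name;
</pre></blockquote>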
Changes to www/webpage-ex.md.
︙ | ︙ | |||
62 63 64 65 66 67 68 69 70 71 72 73 74 75 | href='$ROOT/timeline?namechng'>Example</a> Show check-ins that contain file name changes * <a target='_blank' class='exbtn' href='$ROOT/timeline?u=drh&c=2014-01-08&y=ci'>Example</a> Show check-ins circa 2014-01-08 by user "drh". <big><b>→</b></big> (Hint: In the pages above, click the graph nodes for any two check-ins or files to see a diff.) <big><b>←</b></big> * <a target='_blank' class='exbtn' href='$ROOT/search?s=interesting+pages'>Example</a> Full-text search for "interesting pages". | > > > > > | 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 | href='$ROOT/timeline?namechng'>Example</a> Show check-ins that contain file name changes * <a target='_blank' class='exbtn' href='$ROOT/timeline?u=drh&c=2014-01-08&y=ci'>Example</a> Show check-ins circa 2014-01-08 by user "drh". * <a target='_blank' class='exbtn' href='$ROOT/timeline?from=version-1.34&to=version-1.35&chng=src/timeline.c,src/doc.c'>Example</a> Show all check-ins between version-1.34 and version-1.35 that make changes to either of the files src/timeline.c or src/doc.c. <big><b>→</b></big> (Hint: In the pages above, click the graph nodes for any two check-ins or files to see a diff.) <big><b>←</b></big> * <a target='_blank' class='exbtn' href='$ROOT/search?s=interesting+pages'>Example</a> Full-text search for "interesting pages". |
︙ | ︙ | |||
117 118 119 120 121 122 123 | * <a target='_blank' class='exbtn' href='$ROOT/bigbloblist'>Example</a> The largest objects in the repository. * <a target='_blank' class='exbtn' href='$ROOT/hash-collisions'>Example</a> SHA1 prefix collisions | > > > > | 122 123 124 125 126 127 128 129 130 131 132 | * <a target='_blank' class='exbtn' href='$ROOT/bigbloblist'>Example</a> The largest objects in the repository. * <a target='_blank' class='exbtn' href='$ROOT/hash-collisions'>Example</a> SHA1 prefix collisions * <a target='_blank' class='exbtn' href='$ROOT/sitemap'>Example</a> The "sitemap" containing links to many other pages |
Changes to www/webui.wiki.
1 2 3 4 5 6 7 8 9 | <title>The Fossil Web Interface</title> One of the innovative features of Fossil is its built-in web interface. This web interface provides everything you need to run a software development project: * [./bugtheory.wiki | Ticketing and bug tracking] * [./wikitheory.wiki | Wiki] * [./embeddeddoc.wiki | On-line documentation] | | > > < | | < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 | <title>The Fossil Web Interface</title> One of the innovative features of Fossil is its built-in web interface. This web interface provides everything you need to run a software development project: * [./bugtheory.wiki | Ticketing and bug tracking] * [./wikitheory.wiki | Wiki] * [./embeddeddoc.wiki | On-line documentation] * [./event.wiki | Technical notes] * Timelines * Full text search over all of the above * Status information * Graphs of revision and branching history * File and version lists and differences * Download historical versions as ZIP archives * Historical change data * Add and remove tags on check-ins * Move check-ins between branches * Revise check-in comments * Manage user credentials and access permissions * And so forth... (some [./webpage-ex.md|examples]) You get all of this, and more, for free when you use Fossil. There are no extra programs to install or setup. Everything you need is already pre-configured and built into the self-contained, stand-alone Fossil executable. As an example of how useful this web interface can be, the entire [./index.wiki | Fossil website], including the document you are now reading, is rendered using the Fossil web interface, with no enhancements, and little customization. <blockquote> <b>Key point:</b> <i>The Fossil website is just a running instance of Fossil! |
︙ | ︙ | |||
50 51 52 53 54 55 56 | To start using the built-in Fossil web interface on an existing Fossil repository, simply type this: <b>fossil ui existing-repository.fossil</b> Substitute the name of your repository, of course. The "ui" command will start a webserver running (it figures out an | | 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 | To start using the built-in Fossil web interface on an existing Fossil repository, simply type this: <b>fossil ui existing-repository.fossil</b> Substitute the name of your repository, of course. The "ui" command will start a webserver running (it figures out an available TCP port to use on its own) and then automatically launch your web browser to point at that server. If you run the "ui" command from within an open check-out, you can omit the repository name: <b>fossil ui</b> The latter case is a very useful short-cut when you are working on a Fossil project and you want to quickly do some work with the web interface.
︙ | ︙ | |||
148 149 150 151 152 153 154 | #!/usr/local/bin/fossil repository: /home/www/sample-project.fossil </verbatim> Adjust the script above so that the paths are correct for your system, of course, and also make sure the Fossil binary is installed on the server. But that is <u>all</u> you have to do. You now have everything you need to host | | | | 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 | #!/usr/local/bin/fossil repository: /home/www/sample-project.fossil </verbatim> Adjust the script above so that the paths are correct for your system, of course, and also make sure the Fossil binary is installed on the server. But that is <u>all</u> you have to do. You now have everything you need to host a distributed software development project in less than five minutes using a two-line CGI script. Instructions for setting up an SCGI server are [./scgi.wiki | available separately]. You don't have a CGI- or SCGI-capable web server running on your server machine? Not a problem. The Fossil interface can also be launched via inetd or xinetd. An inetd configuration line sufficient to launch the Fossil web interface looks like this: <verbatim> 80 stream tcp nowait.1000 root /usr/local/bin/fossil \ /usr/local/bin/fossil http /home/www/sample-project.fossil </verbatim> As always, you'll want to adjust the pathnames to whatever is appropriate for your system. The xinetd setup uses a different syntax but follows the same idea. |
Added www/whyusefossil.wiki.
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 | <title>Why Use Fossil</title> <h1 align='center'>Why You Should Use Fossil</h1> <p align='center'><b>Or, if not Fossil, at least some kind of modern version control<br>such as Git, Mercurial, or Subversion.</b></p> <p align='center'>(Presented in outline form, for people in a hurry)</p> <ol> <li><p><b>Benefits of Version Control</b> <ol type='A'> <li><p><b>Immutable file and version identification</b> <ol type='i'> <li>Simplified and unambiguous communication between developers <li>Detect accidental or surreptitious changes <li>Locate the origin of discovered files </ol> <li><p><b>Parallel development</b> <ol type='i'> <li>Multiple developers on the same project <li>Single developer with multiple subprojects <li>Experimental features do not contaminate the main line <li>Development/Testing/Release branches <li>Incorporate external changes into the baseline </ol> <li><p><b>Historical record</b> <ol type='i'> <li>Exactly reconstruct historical builds <li>Locate when and by whom faults were injected <li>Find when and why content was added or removed <li>Team members see the big picture <li>Research the history of project features or subsystems <li>Copyright and patent documentation <li>Regulatory compliance </ol> <li><p><b>Automatic replication and backup</b> <ol type='i'> <li>Everyone always has the latest code <li>Failed disk-drives cause no loss of work <li>Avoid wasting time doing manual file copying <li>Avoid human errors during manual backups </ol> </ol> <a name='definitions'></a> <li><p><b>Definitions</b></p> <ul> <li><p><b>Project</b> → a collection of computer files that serve some common purpose. Often the project is a software application and the individual files are source code together with makefiles, scripts, and "README.txt" files. Other examples of projects include books or manuals in which each chapter or section is held in a separate file. <ul> <li><p>Projects change and evolve. 
The whole purpose of version control is to track and manage that evolution. <li><p>Most projects contain many files, but it is possible to have a project consisting of just a single file. <li><p>Fossil requires that all the files for a project must be collected into a single directory hierarchy - a single folder possibly with layers of subfolders. Fossil is not a good choice for managing a project that has files scattered hither and yon all over the disk. In other words, Fossil only works for projects where the files are laid out such that they can be archived into a ZIP file or tarball. </ul> <li><p><b>Repository</b> → (also called "repo") a single file that contains all historical versions of all files in a project. A repo is similar to a ZIP archive in that it is a single file that stores compressed versions of many other files. Files can be extracted from the repo and new files can be added to the repo, just as with a ZIP archive. But a repo has other capabilities above and beyond what a ZIP archive can do. <ul> <li><p>Fossil does not care what you name your repository files, though names ending with ".fossil" are recommended. <li><p>A single project typically has multiple, redundant repositories on separate machines. <li><p>All repositories stay synchronized with one another by exchanging information via HTTP or SSH. <li><p>All repos for a single project redundantly store all information about that project. So if any one repo is lost due to a disk crash, all content is preserved in the surviving repos. <li><p>The usual arrangement is one repository per user. And since most users these days have their own computer, that means one repository per computer. But this is not a requirement. It is ok to have multiple copies of the same repository on the same computer. <li><p>Fossil works fine with just a single copy of the repository. But in that case there is no redundancy. If that one repository file is lost due to a hardware malfunction, then there is no way to recover the project. <li><p>Best practice is to keep all repositories for a user in a single folder. Folders such as "~/Fossils" or "%USERPROFILE%\Fossils" are recommended. Fossil itself does not care where the repositories are stored. Nor does Fossil require repositories to be kept in the same folder. But it is easier to organize your work if all repositories are kept in the same place. </ul> <li><p><b>Check-out</b> → a set of files that have been extracted from a repository and that represent a particular version or snapshot of the project. <ul> <li><p>Check-outs must be on the same computer as the repository from which they are extracted. This is just like with a ZIP archive: one must have the ZIP archive file on the local machine before extracting files from ZIP archive. <li><p>There can be multiple check-outs (in different folders) from the same repository. <li><p>The repository must be on the same computer as the check-out, but the relative locations of the repo and the check-out are arbitrary. The repository may be located inside the folder holding the check-out, but it certainly does not have to be and usually is not. <li><p>A special file exists in every check-out that tells Fossil from which repository the check-out was extracted, and which version of the project the check-out represents. This is the ".fslckout" file on unix systems or the "_FOSSIL_" file on Windows. </ul> <li><p><b>Check-in</b> → another name for a particular version of the project. 
A check-in is a collection of files inside of a repository that represent a snapshot of the project for an instant in time. Check-ins exist only inside of the repository. This contrasts with a check-out which is a collection of files outside of the repository. <ul> <li><p>Every check-out knows the check-in from which it was derived. But check-outs might have been edited and so might not exactly match their associated check-in. <li><p>Check-ins are immutable. They can never be changed. But check-outs are collections of ordinary files on disk. The files of a check-out can be edited just like any other file. <li><p>A check-in can be thought of as an historical snapshot of a check-out. <li><p>"Check-in", "version", "snapshot", and "revision" are synonyms. <li><p> When used as a noun, the word "commit" is another synonym for "check-in". When used as a verb, the word "commit" means to create a new check-in. </ul> </ul> <li><p><b>Basic Fossil commands</b> <ul> <li><p><b>clone</b> → Make a copy of a repository. The original repository is usually (but not always) on a remote machine and the copy is on the local machine. The copy remembers the network location from which it was copied and (by default) tries to keep itself synchronized with the original. <li><p><b>open</b> → Create a new check-out from a repository on the local machine. <li><p><b>update</b> → Modify an existing check-out so that it is derived from a different version of the same project. <li><p><b>commit</b> → Create a new version (a new check-in) of the project that is a snapshot of the current check-out. <li><p><b>revert</b> → Undo all local edits on a check-out. Make the check-out be an exact copy of its associated check-in. <li><p><b>push</b> → Copy content found in a local repository over to a remote repository. (Fossil usually does this automatically in response to a "commit" and so this command is seldom used, but it is important to understand it.) <li><p><b>pull</b> → Copy new content found in a remote repository into a local repository. A "pull" by itself does not modify any check-out. The "pull" command only moves content between repositories. However, the "update" command will (often) automatically do a "pull" before attempting to update the local check-out. <li><p><b>sync</b> → Do both a "push" and a "pull" at the same time. <li><p><b>add</b> → Add a new file to the local check-out. The file must already be on disk. This command tells Fossil to start tracking and managing the file. This command affects only the local check-out and does not modify any repository. The new file is inserted into the repository at the next "commit" command. <li><p><b>rm/mv</b> → Short for 'remove' and 'move', these commands are like "add" in that they specify pending changes to the structure of the check-out. As with "add", no changes are made to the repository until the next "commit". </ul> <li><p><b>The history of a project is a Directed Acyclic Graph (DAG)</b> <ul> <li><p>Fossil (and other distributed VCSes like Git and Mercurial, but not Subversion) represent the history of a project as a directed acyclic graph (DAG). <ul> <li><p>Each check-in is a node in the graph <li><p>If check-in X is derived from check-in Y then there is an arc in the graph from node X to node Y. <li><p>The older check-in (Y) is called the "parent" and the newer check-in (X) is the "child". The child is derived from the parent. </ul> <li><p>Two users (or the same user working in different check-outs) might commit different changes against the same check-in.
This results in one parent node having two or more children. <li><p>Command: <b>merge</b> → combines the work of multiple check-ins into a single check-out. That check-out can then be committed to create a new check-in that has two (or more) parents. <ul> <li><p>Most check-ins have just one parent, and either zero or one child. <li><p>When a check-in has two or more parents, one of those parents is the "primary parent". All the other parent nodes are "secondary". Conceptually, the primary parent shows the main line of development. Content from the secondary parents is added into the main line. <li><p>The "direct children" of a check-in X are all children that have X as their primary parent. <li><p>A check-in node with no direct children is sometimes called a "leaf". </ul> <li><p>Definition: <b>branch</b> → a sequence of check-ins that are all linked together in the DAG through the primary parent. <ul> <li><p>Branches are often given names which propagate to direct children. <li><p>It is possible to have multiple branches with the same name. Fossil has no problem with this, but it can be confusing to humans, so best practice is to give each branch a unique name. <li><p>The name of a branch can be changed by adding special tags to the first check-in of a branch. The name assigned by this special tag automatically propagates to all direct children. </ul> </ul> <li><p><b>Why version control is important (reprise)</b> <ol type="A"> <li><p>Every check-in and every individual file has a unique name - its SHA1 hash. Team members can unambiguously identify any specific version of the overall project or any specific version of an individual file. <li><p>Any historical version of the whole project or of any individual file can be easily recreated at any time and by any team member. <li><p>Accidental changes to files can be detected by recomputing their SHA1 hash. <li><p>Files of unknown origin can be identified using their SHA1 hash. <li><p>Developers are able to work in parallel, review each other's work, and easily merge their changes together. External revisions to the baseline can be easily incorporated into the latest changes. <li><p>Developers can follow experimental lines of development, then revert back to an earlier stable version if the experiment does not work out. Creativity is enhanced by allowing crazy ideas to be investigated without destabilizing the project. <li><p>Developers can work on several independent subprojects, flipping back and forth from one subproject to another at will, and merge patches together or back into the main line of development as they mature. <li><p>Older changes can be easily backed out of recent revisions, for example if bugs are found long after the code was committed. <li><p>Enhancements in a branch can be easily copied into other branches, or into the trunk. <li><p>The complete history of all changes is plainly visible to all team members. Project leaders can easily keep track of what all team members are doing. Check-in comments help everyone to understand and/or remember the reason for each change. <li><p>New team members can be brought up-to-date with all of the historical code, quickly and easily. <li><p>New developers, interns, or inexperienced staff members who still do not understand all the details of the project or who are otherwise prone to making mistakes can be assigned significant subprojects to be carried out in branches without risking main line stability. <li><p>Code is automatically synchronized across all machines.
No human effort is wasted copying files from machine to machine. The risk of human errors during file transfer and backup is eliminated. <li><p>A hardware failure results in minimal lost work because all previously committed changes will have been automatically replicated on other machines. <li><p>The complete work history of the project is conveniently archived in a single file, simplifying long-term record keeping. <li><p>A precise historical record is maintained which can be used to support copyright and patent claims or regulatory compliance. </ol> </ol> |
Changes to www/wikitheory.wiki.
1 2 3 | <title>Wiki In Fossil</title> <h2>Introduction</h2> | > | | < < < < | < < | < < < < < < < < < < < < < | | | > | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 | <title>Wiki In Fossil</title> <h2>Introduction</h2> Fossil uses [/wiki_rules | Fossil wiki markup] and/or [/md_rules | Markdown markup] for many things: * Stand-alone wiki pages. * Description and comments in [./bugtheory.wiki | bug reports]. * Check-in comments. * [./embeddeddoc.wiki | Embedded documentation] files whose name ends in ".wiki" or ".md" or ".markdown". * [./event.wiki | Technical notes]. The [/wiki_rules | formatting rules for fossil wiki] are designed to be simple and intuitive. The idea is that wiki provides paragraph breaks, numbered and bulleted lists, and hyperlinking for simple documents together with a safe subset of HTML for more complex formatting tasks. The [/md_rules | Markdown formatting rules] are more complex, but are also more widely known, and are thus provided as an alternative. <h2>Stand-alone Wiki Pages</h2> Each wiki page has its own revision history which is independent of the sequence of check-ins. Wiki pages can branch and merge just like check-ins, though as of this writing (2008-07-29) there is no mechanism in the user interface to support branching and merging. The current implementation of the wiki shows the version of the wiki page that has the most recent timestamp. In other words, if two users make unrelated changes to the same wiki page on separate repositories and those repositories are synced, the wiki page will fork. The web interface will display whichever edit was checked in last. The other edit can be found in the history. The file format will support merging the branches back together, but there is no mechanism in the user interface (yet) to perform the merge. Every change to a wiki page is a separate [./fileformat.wiki | control artifact] of type [./fileformat.wiki#wikichng | "Wiki Page"]. <h2>Embedded Documentation</h2> Files in the source tree that use the ".wiki", ".md", or ".markdown" suffixes can be accessed and displayed using special URLs to the fossil server. This allows project documentation to be stored in the source tree and accessed online. (Details are described [./embeddeddoc.wiki | separately].) Some projects prefer to store their documentation in wiki. There is nothing wrong with that. But other projects prefer to keep documentation as part of the source tree, so that it is versioned along with the source tree and so that only developers with check-in privileges can change it. Embedded documentation serves this latter purpose. Both forms of documentation use the exact same markup. Some projects may choose to use both forms of documentation at the same time. Because the same format is used, it is trivial to move a file from wiki to embedded documentation or back again as the project evolves. <h2>Bug-reports and check-in comments</h2> The comments on check-ins and the text in the descriptions of bug reports both use wiki formatting. Exactly the same set of formatting rules apply. There is never a need to learn one formatting language for documentation and a different markup for bugs or for check-in comments.